On interviewing "non engineers" when you're an engineer [Red Flags]
I have interviewed hundreds of "founding marketer / go-to-market / business" candidates: for the companies I previously worked at, for the dozen early-stage startups I invested in, and for Lago, the company I co-founded.
Here are the most common traps technical founders should avoid when interviewing a "business candidate". The title varies, but it's generally the first non-technical person, in charge of supporting the founders on go-to-market, customer success, finance, etc.
If a candidate or vendor sells one of the topics below to you as the solution to a burning issue, in 95% of cases it’s a red flag.
If you’re earlier than series B, and have no idea how to get press exposure, a PR agency alone won’t help. Their retainers usually start at $10k/month, so costs can increase quickly.
PR agencies are usually bad at crafting new angles and iterating on your positioning, and they are rarely proactive. They might help at a later stage to amplify your reach, once you’re established and your messaging has stabilized. But even at that point, you will need 10-20% of a full-time employee to drive them.
If you don’t know where to start: reach out to internal ‘brand and communication leads’ at companies in your industry that are one or two steps ahead of you (if you’re at seed level, learn from someone at a series A startup). They are the ones who actually do the work, and the ones you can learn from.
The point is: at an early stage, your first business hire should generate attention, page views, and PR (if it's a good channel for you) through a creative approach, not by outsourcing this to an agency. Having worked with the "self-proclaimed best agencies" myself: they might help get your fundraising announcement out on TechCrunch (you actually don't need them for that), but they won't help you build the muscle to be featured consistently.
Some ex-consultants love attribution and want to bring it in when they join a startup. I’ve worked at McKinsey myself, so I can relate: they can live in a world of concepts and theory instead of getting down to the nitty-gritty of operational tasks.
What is ‘attribution’?
Attribution models are a way to ‘attribute’ a conversion (usually a sign-up or a sale) to one or several marketing channels: a TV campaign, a Facebook ads campaign, etc.
For instance, if one of your leads interacts with a Facebook ad, then with a Google AdWords ad, then talks to a Sales Executive during an event, and finally signs up after clicking on a LinkedIn ad, how do you determine the influence of each channel on the sign-up?
Is it 100% Facebook (first touch)? 100% LinkedIn (last touch)? Equally distributed (¼ per touchpoint)? The list can go on.
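To make this concrete, here is a minimal sketch (illustrative only, not any vendor's actual model) of how first-touch, last-touch, and linear attribution would split the credit for the journey above:

```python
from collections import defaultdict

# Touchpoints of the lead described above, in chronological order.
journey = ["facebook_ad", "google_adwords", "sales_event", "linkedin_ad"]

def attribute(touchpoints, model="linear"):
    """Split one conversion's credit across channels under a given model."""
    credit = defaultdict(float)
    if model == "first_touch":
        credit[touchpoints[0]] = 1.0        # 100% to the first interaction
    elif model == "last_touch":
        credit[touchpoints[-1]] = 1.0       # 100% to the final interaction
    elif model == "linear":
        for channel in touchpoints:         # equal share per touchpoint
            credit[channel] += 1.0 / len(touchpoints)
    else:
        raise ValueError(f"unknown model: {model}")
    return dict(credit)

for model in ("first_touch", "last_touch", "linear"):
    print(model, attribute(journey, model))
# first_touch {'facebook_ad': 1.0}
# last_touch  {'linkedin_ad': 1.0}
# linear      {'facebook_ad': 0.25, 'google_adwords': 0.25, 'sales_event': 0.25, 'linkedin_ad': 0.25}
```

The hard part is not this logic; it's piping every touchpoint (ads, events, emails, sales calls) into one clean dataset, which is why this becomes the data engineering project described below.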
Defining and implementing a custom attribution model takes a lot of debating (debating = time = money) and a lot of engineering resources (it’s a data engineering project).
Here’s the early-stage, get-s***-done mindset you should look for instead:
a) Keep your global cost of acquisition (CAC) under control, with a simple ratio: global acquisition spend / number of new conversions (see the sketch below this list).
b) Identify ‘no-brainer’ actions: actions that are needed, and that no analysis or complex attribution model will deprioritize.
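A minimal sketch of point a), with hypothetical per-channel numbers: blended CAC is a single division, and no attribution model is required to compute it.

```python
# Hypothetical monthly numbers per channel (spend in €, conversions = new customers).
channels = {
    "facebook_ads": {"spend": 4_000, "conversions": 25},
    "google_adwords": {"spend": 6_000, "conversions": 40},
    "events": {"spend": 5_000, "conversions": 10},
}

total_spend = sum(c["spend"] for c in channels.values())
total_conversions = sum(c["conversions"] for c in channels.values())

# Blended CAC: global acquisition spend / number of new conversions.
blended_cac = total_spend / total_conversions
print(f"Blended CAC: €{blended_cac:.0f} per new customer")  # €200
```

If that number stays under the ceiling you set (the ‘X€’ mentioned in the Pros below), you know acquisition is under control without debating which channel deserves the credit.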
There are many other topics you can address:
• What does your first interaction with leads look like? Can it be improved? I’m thinking: how hard have you worked on your landing pages’ conversion rates? On your cold email copy? On the SEO ranking of your most viewed pages?
• How about the last touch? Have you tweaked your sales script and tested it? Have you tested different copy for your AdWords ads?
• What’s the satisfaction rate of people in contact with your Sales team at different stages of the funnel?
• If you organize events, what’s the NPS of attendees?
• Do some of your new users drop off during onboarding and never come back? Have you tried to fix this?
• What does your activation rate look like after onboarding? Have you defined it and are you monitoring and improving it over time?
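A minimal sketch of a few of these metrics, with hypothetical counts: each one is a simple ratio you can track weekly, not a modeling project.

```python
# Hypothetical weekly funnel counts.
signups = 400
completed_onboarding = 260
activated = 180  # performed whatever "activation" action you defined (e.g. first API call)

onboarding_drop_off = 1 - completed_onboarding / signups
activation_rate = activated / signups

# NPS from a hypothetical event survey: % promoters (9-10) minus % detractors (0-6).
scores = [10, 9, 9, 8, 7, 10, 6, 9, 3, 10]
promoters = sum(s >= 9 for s in scores) / len(scores)
detractors = sum(s <= 6 for s in scores) / len(scores)
nps = round((promoters - detractors) * 100)

print(f"Onboarding drop-off: {onboarding_drop_off:.0%}")  # 35%
print(f"Activation rate:     {activation_rate:.0%}")      # 45%
print(f"NPS:                 {nps}")                       # 40
```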
✅ Pros:
You can identify no-brainers, focus your efforts on direct and high-impact projects, while keeping your global cost of acquisition under control (i.e. you know you don’t spend more than X€ to acquire a new customer, regardless of which channel contributes most, and that’s ‘good enough’).
❌ Cons:
You don’t have an exact or sophisticated attribution model, nor a visualization of each lead’s journey. It doesn’t matter, as long as you’re growing and your CAC is under control.
A/B testing is fascinating, but it's great for PhDs, not for early-stage startups watching their runway. Why?
What are the prerequisites for a successful A/B test?
- A large enough testing sample: basically, you need your main user base, a testing sample for version A, and a testing sample for version B. If your samples aren’t large enough, the results simply aren’t statistically significant, and in many cases companies don’t have that kind of volume (see the sketch after this list). I’ve seen vendors selling A/B testing for pricing to seed stage companies.
- A very specific topic to test: if your version A differs completely from version B and the results don’t show a real difference in performance, you won’t even know what to learn from it; and
- Enough time ahead of you: to define two versions, implement them in the front end and the back end, brief your team, and then wait for the feedback loop to complete. If you’re testing pricing and your sales cycle is two months, you may need six months to complete a test: one month of preparation, four months to run a few cohorts, and a few weeks to analyze the results.
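To see why the first prerequisite bites early-stage companies, here is a minimal sketch using the standard two-proportion sample-size approximation; the baseline conversion rate and the lift to detect are illustrative assumptions.

```python
from statistics import NormalDist

def sample_size_per_variant(baseline, lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect `lift` over a `baseline`
    conversion rate, using the standard two-proportion z-test formula."""
    p1, p2 = baseline, baseline + lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for a 95% confidence level
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(((z_alpha + z_power) ** 2 * variance) / (p2 - p1) ** 2) + 1

# Example: 3% baseline conversion, hoping to detect a 1-point improvement.
n = sample_size_per_variant(baseline=0.03, lift=0.01)
print(f"~{n:,} visitors per variant, ~{2 * n:,} total")  # ≈5,300 per variant, ≈10,600 total
```

With seed-stage traffic or deal flow, reaching those numbers per variant can take months on its own, before the sales cycle in the last bullet even starts.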
Can you afford to wait and spend so many resources? Does someone on your team have the data, marketing and project management skills to lead this? Can you staff engineers and product managers on this?
In most cases, you can’t. I’ve only seen very late-stage companies actually run A/B tests in their product.
How do others do it? They use surveys, polls, interviews, and ‘educated guesses’, and they optimize for iteration speed. Is it 100% scientific? Not really. But in a high-ambiguity environment, are your chances of success higher if you iterate continuously for six months, or if you design only two options and wait for six months?
That’s why A/B testing is great for use cases with a quick feedback loop, such as testing an email subject line or a landing page for an ads campaign, not for the rest. And not for pricing.
Instead, if you need help with your pricing, feel free to reach out.
- Anh-Tho