PPC Service Lifecycle

1 Setting Up New Campaigns

1.1 Researching the Brand

Once our Operations Specialist has collected payment and all required access, we start working on the new campaigns. We begin by reviewing the key information you provide via the Service Request form in Pivot. This includes things like your client’s primary objective, a profile of their ideal client or customer, and their preferred geotargets / ad schedule. This helps us to understand their PPC goals.

We’ll also explore their website and any other creative assets, paying attention to the way they present their brand and offerings. If necessary, we will look at competitor websites to gain further clarity. Clients with historical PPC data from old campaigns bring a wealth of opportunities to the table, and we always make it a point to review what has and hasn’t worked well in the past, much like a mini-audit. We use this information to determine our strategic options for reaching their goals.

Once we understand their goals and have some ideas, we start researching the targeting for the new campaigns. For paid search (Google Ads & Microsoft Ads), we start digging into Google’s Keyword Planner to identify the terms with the highest intent, and we also keep an eye out for things to avoid. For paid social, we dive into the Audience Insights tools in Facebook or LinkedIn to evaluate the size and interest of various user pools. We also reference historical data from our other clients in similar industries and leverage experience from managing and optimizing hundreds of campaigns.

1.2 Segmenting Components

We try to keep our campaign naming conventions standardized so that the logic is easy to understand across our various accounts. However, the structure of the campaigns often varies from one client to the next. For instance, we might organize campaigns by:

  • Product categories (e.g. shoes, shirts, hats)
  • Service types (e.g. lawn maintenance, tree maintenance, shrub maintenance)
  • Ad networks and placements (e.g. display, video, retargeting, apps)
  • Audience types (e.g. lookalike, interest, retargeting, in-market)
  • Conversions, transactions, demographics, geotargets, schedules, or budgets

Within the campaigns, we organize the ad groups or ad sets by things like:

  • Similar keyword clusters (e.g. standing desk mats grouped with standing desk pads, while customers looking for a person vs. a service would be separated)
  • Specific creative or ad types (e.g. all ads with the same video or image, text ads vs. display images)
  • Various campaign intentions (e.g. lead generation, eCommerce, branding, retargeting, lookalike)

We try to balance the desire to improve personalization through segmentation with the desire to increase data collection by grouping similar items together. Along the way, we identify outliers that do not seem to fit in and evaluate whether or not we want to target those options. Sometimes it makes more sense to exclude them and focus on options that generate more meaningful volume. We’re also always on the lookout for anything that should obviously be excluded, and we have a number of other exclusions that we apply based on data we’ve gathered from similar campaigns in the past.

1.3 Customizing Ad Copy

Once we have the campaigns and ad groups / sets ready to go, we build the ads. We usually start with a narrow A/B test and then expand to introduce new elements once the volume and data increase.

On paid search, the character limits are fairly narrow, so we often write copy that is designed to resonate with specific emotions and then test different emotions to see which resonate best with the audience.

On paid social, the copy can be much longer, so we may test direct copy that leads with the offer against creative or light copy that tells a story or appeals to the audience’s sense of humor. We try to avoid using stock images and prefer branded creative like videos, talking heads, or unique images that stand out (e.g. organic images of the business). We don’t want the creative to look too much like a boring ad.

Sometimes we use similar copy in multiple places because that may help neutralize differences between campaigns and ad groups / sets. This helps us discover a more accurate representation of the impact that the actual ad copy has, versus confusing it with other settings that may vary. We make it a point to share our findings so that your clients can benefit further (e.g. applying PPC insights to copy for print ads).

1.4 Setting Automated Bids

As more and more ad publishers focus their efforts on automated bidding, we try to balance control with scalability for our clients. On paid search, we’ll usually start out with manual bids to control the spend initially, and then we’ll switch to test automated bidding methods once there is enough conversion data to feed the system. Paid social tends to be more automated, so we often let the system determine whether highest value or lowest cost is the better strategy for the campaign. Either way, we’re regularly evaluating performance to stay on track. Some automation is inevitable.

1.5 Configuring Conversion Tracking

Google Tag Manager is our tool of choice. We have container recipes for each of our ad publishers, and we add custom tags to track things like unique forms. We’ll make sure that all the tags, triggers, variables, and IDs are properly configured. Our goal is to track every conversion and ensure complete attribution for all the leads and revenue the campaigns generate. We often find room to improve on old accounts.
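As an illustration, a custom tag for a unique form usually starts with an event pushed to the data layer, which a GTM trigger can then listen for. The event and form names below are hypothetical examples, not part of an actual container recipe:

```javascript
// Initialize the data layer if GTM's snippet has not already done so.
var dataLayer = dataLayer || [];

// Push a custom event when a unique form is submitted. A GTM trigger
// listening for the "form_submission" event can then fire a conversion
// tag for the relevant ad publisher.
dataLayer.push({
  event: 'form_submission',   // hypothetical custom event name
  formId: 'contact-quote'     // hypothetical form identifier
});
```

In GTM, the `formId` value would typically be read back out through a Data Layer Variable so one trigger/tag pair can cover multiple forms.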

1.6 Previewing Ads for Launch

If requested on the Service Request form, we will provide an opportunity for you (and your client) to review the ad copy we have drafted. We allow up to two rounds of revisions, and then we are more than happy to allow you or the client to make your own changes. Once the build is approved, we launch the new campaigns and watch to make sure that the ads are approved and start generating new traffic. It is often helpful to have a quick conversation during this process, just to make sure we are all on the same page with expectations for the early days of management.

2 Optimizing Existing Campaigns

2.1 Optimizing Performance

While many companies have invented PPC management software that is supposed to do everything, we prefer manual optimizations supported by scheduled changes (e.g. when a campaign is supposed to turn off). This provides maximum control over the results, and it equips our team to provide answers and solutions in ways that people relying completely on automation can rarely match. We provide support for our partners on all business days, but we plan specific days for optimizing and changing each account. We may postpone certain changes to allow enough data to accrue first. Overly frequent changes often lead to jumbled data where it is hard to pinpoint reasons for various performance trends.

2.2 Managing the Budget

The first and most important thing we check during each optimization is the way the campaigns are trending in ad spend. We want to know whether the current rate of spend will put us above, below, or at the provided monthly budget. We set campaign limits that are slightly below the actual budget just to ensure that spikes in spend due to automated changes do not cause any accidental overspend.
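The pacing check described above is straightforward arithmetic. A minimal sketch, with made-up numbers, of how month-end spend can be projected and compared against the budget:

```javascript
// Project month-end spend from month-to-date spend using a
// straight-line average of spend per day so far.
function projectMonthEndSpend(spendToDate, dayOfMonth, daysInMonth) {
  var dailyRate = spendToDate / dayOfMonth; // average spend per day
  return dailyRate * daysInMonth;           // projected total for the month
}

// Example: $450 spent by day 10 of a 30-day month against a $1,500 budget.
var projected = projectMonthEndSpend(450, 10, 30); // 1350
var budget = 1500;
var pacing = projected < budget ? 'under' : projected > budget ? 'over' : 'on';

// A campaign limit slightly below the actual budget (e.g. 95%) guards
// against accidental overspend from spikes caused by automated changes.
var campaignLimit = budget * 0.95;
```

The 95% cushion is an illustrative figure; the actual margin depends on how volatile the campaign’s daily spend is.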

The following items (2.3 – 2.8) are various optimization techniques we employ as necessary.

2.3 Checking Campaign Reach

This applies to all campaigns, even if they have been running for a while. We always check recently launched campaigns since those are the most susceptible to variable volume, but changing ad policies and seasonal fluctuations mean that we also have to watch established campaigns. If we notice that a campaign is spending too much or too little, or if any of the reach metrics seem to become inflated or deflated, then we try to identify the cause for the change and brainstorm any counter-changes.

Here are a few examples. If the campaigns are limited by budget, we’d want to see whether a particular component or setting is the primary cause. If the campaigns are underspending the budget, we might expand the targeting in one or more areas. If the search impression share seems too high or too low, then we may try to compensate with adjustments to the settings and bids. If the ad publisher provides an optimization score, we evaluate reach recommendations based on client goals. We also check these kinds of things at the ad group or ad set level, and even at the ad, audience, and keyword level too.

2.4 Improving Ads and Traffic

We monitor various metrics to observe the impact that different ads have on the overall performance of the campaigns. Two primary metrics are click through rate (CTR) and conversion rate (CVR). Sometimes we leverage best practices, but often these metrics have to be evaluated in light of specific client goals.

For instance, a high CTR is ideal for most clients, but a lower CTR may actually be better for a client where the traffic is searching for similar but undesirable services and can only be filtered by ad copy designed to qualify site visitors. On the other hand, both of these metrics and others like cost per click (CPC) or impressions (Impr) will vary based not only on the ad copy but also on the settings and targeting of the campaign. On paid social, we may also leverage tactics to stack social proof on specific ads by working with them across multiple campaigns or ad sets, and we also monitor things like how often a particular user sees the same ad to avoid ad fatigue. Overall, we work hard to trace performance trends back to the source and understand what changes to which components produce the greatest impact.
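For reference, the core metrics discussed above are simple ratios of the raw counts reported by the ad publisher. A minimal sketch with illustrative numbers:

```javascript
// Core ad metrics derived from the publisher's raw counts.
function metrics(impressions, clicks, conversions, spend) {
  return {
    ctr: clicks / impressions,   // click-through rate
    cvr: conversions / clicks,   // conversion rate (per click)
    cpc: spend / clicks          // average cost per click
  };
}

// Example: 10,000 impressions, 250 clicks, 10 conversions, $500 spend.
var m = metrics(10000, 250, 10, 500);
// m.ctr = 0.025 (2.5%), m.cvr = 0.04 (4%), m.cpc = 2 ($2.00)
```

As the section notes, none of these numbers are good or bad in isolation; they have to be read against the client’s goals and the campaign’s settings.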

2.5 Boosting Return on Ad Spend (ROAS)

This objective is the second most important, after ensuring that we are spending the right amount. Severely underspending the budget limits our ability to collect data about ROAS, while overspending undermines credibility. When the spend is correct, we aim to ensure maximum effectiveness.

In one sense, everything we do in the account works toward this goal. However, we also get granular with things like tracking conversion value and drilling down on campaigns or ad groups with a high cost per lead (CPL). We try to identify the specific source of the conversions and allocate more budget there while we reduce spend on lagging segments. We also compare certain metrics over various periods of time to see if we can pinpoint specific times when the performance trends changed so we can identify a cause.
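The drill-down described above amounts to comparing ROAS (conversion value divided by spend) and cost per lead across segments. A minimal sketch, with hypothetical campaign names and numbers:

```javascript
// Compare ROAS (conversion value / spend) and CPL (spend / leads)
// across campaigns to see where budget might be shifted.
function roas(convValue, spend) { return convValue / spend; }
function cpl(spend, leads) { return spend / leads; }

// Hypothetical campaigns for illustration only.
var campaigns = [
  { name: 'Brand Search',   spend: 200, convValue: 1200, leads: 20 },
  { name: 'Generic Search', spend: 500, convValue: 750,  leads: 15 }
];

// Rank by ROAS, highest first; leaders are candidates for more budget,
// laggards for reduced spend.
var ranked = campaigns
  .map(function (c) {
    return { name: c.name, roas: roas(c.convValue, c.spend), cpl: cpl(c.spend, c.leads) };
  })
  .sort(function (a, b) { return b.roas - a.roas; });
```

In this example, Brand Search returns $6 per $1 spent at a $10 CPL, while Generic Search returns $1.50 per $1 at a much higher CPL, so budget would tend to flow toward the former.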

2.6 Adjusting Segmentation

One way we adjust the data we collect is by increasing or decreasing segmentation. If we find that some areas of an account are producing spotty data, we may merge them together to increase density. Conversely, if some areas are producing voluminous data, we may break them up to evaluate components more carefully. As noted in the campaign build section, we also tend to organize high-level segmentation around certain themes, and we may decide at times to change that organization for the same reasons.

2.7 Updating Bidding Methods

All of the ad publishers are investing significant R&D resources into automated bidding. While decreased control for advertisers raises inevitable concerns about ad publisher trustworthiness, it is also important for your client to make the best use of available options. For paid search, we often start campaigns with manual bids to control the initial volume, but for all services, we find many instances where automated bidding produces superior results. There are several types of automated bid strategies, and so we often test them against each other to see which one(s) produce the best results for each campaign / client.

2.8 Testing New Strategies

While most of the conversation regarding optimizations is about making changes, there is much to be said for keeping things that work well. Sometimes we apply an 80/20 approach, where we maintain 80% of the settings and components in an ad account while testing 20% to see if we can find ways to beat the 80%. If we find something that works, then we gradually roll it out. Similarly, if something within the 80% seems to be exceptionally bad or good, we may break it out so that we can experiment with more targeted settings. This often involves campaigns, ad groups, keywords, ads, and more.

3 Supporting Client Relationships

3.1 Managing Responsibilities

Overseeing the client relationship is one of two major responsibilities we expect from our partners (the other being acquisition). However, we recognize that the expertise we bring to the table is a crucial ingredient in our white-label partnership, so we offer several types of support to our partners to help them manage the client relationships. This preserves the white-label nature of the partnership.

3.2 Scheduling Monthly Meetings

A few days before the end of each month, we remind our partners to schedule a meeting with their account specialist(s) to review their active service(s). We review service information beforehand and make sure things are ready in the event a screen-sharing session is required. During the meeting, we walk through high-level performance metrics and recent trends in the ad account, explaining changes and inviting feedback. We may include questions about things like lead quality and client relationships before wrapping up with a recap and a written summary in the Twist thread for the meeting. These meetings are designed to highlight service value and determine strategy for the next month.

3.3 Sending Strategy Reports

During the first week of every month, we will provide a written summary of performance metrics and recent trends in the ad account. We will also provide explanations for changes and ideas for future strategy, or at least notes about results in existing strategies. We aim to align formatting across our reports, but the content of each update is original and specific to the service(s) addressed. We encourage client questions and rely heavily on our partners to keep us abreast of important updates in the client relationship. If merited, we will make recommendations about budget changes (either up or down).

3.4 Providing Sales Support

If support is needed during the sales process, our partners can request Sales Support. We provide Account Audits for prospects that have already run ads on the ad publisher of choice, or Market Analyses if they are thinking about trying it for the first time. The free versions of these services provide minimal information, but the paid versions encompass a wider range of material and revenue projections. Any fees collected for sales support are applied as credits against onboarding if the client moves forward.

3.5 Messaging in Twist

Lastly, we provide message support in Twist during business days. Our partners have direct access to the specialists managing their services (no Account Managers in between), so it is simple to start a thread and expect a response from the assigned specialist. We aim to provide a response within a maximum of one business day, and many responses are same day. If the request is urgent, we ask our partners to call us before messaging in Twist. We also prefer that our partners use our Change Request form if they need something to be updated in the ad account (to ensure that it is added to our specialists’ task lists).