
Calling it by the real name in Dunedin, Florida

By Annette Lawrence
Real Estate Agent with ReMax Realtec Group

It may be overdue for us to start calling things by their real names.

Sitting here at Casatina in quaint downtown Dunedin, Florida. My $6 margarita was just delivered, and I decided to review a few case studies purported to be 'groundbreaking' in what they reveal.

The first four case studies align perfectly with these three things I have shared before:

  1. Success Theater 
  2. Vanity Metrics
  3. Surrogate Endpoints

I'm going to debunk these “valuable” tests and expose the dirty laundry, which includes…

  • Overhype (Success Theater)
  • Manipulated data (Vanity Metrics)
  • False conclusions (Surrogate Endpoints)

Actually, you may benefit more from understanding the measures I use to separate the bull from the...you know. What I am talking about is what marketers are selling agents 24 hours a day, 7 days a week, 365 days a year.

CONVERSION RATES!

I've evaluated many, maybe hundreds, of conversion rate optimization case studies in the past four years.

My technology background has made me pretty darn good at sniffing out the good ones, the ones I would apply to my own web properties. And I'm equally good at sniffing out the posers peddling weak information that, if implemented, could damage my site and yours.

 

Please be aware: as real estate professionals we often create statistics for an individual client or to post on public forums. These eight points are a proper litmus test, your secret weapon to ensure your data passes the sniff test.

 

Here are the eight points I use to evaluate studies:

  1. Is the sample size available? Usually it is missing, which tells me the sample size was too small to be statistically relevant. (My magic data point number is 13.) Without the visitor counts, you can't run the confidence check sketched after this list.

  2. What is the lift percentage?
    Properly contextualize the lift percentage; there is a sketch of the arithmetic after this list.
    When someone says there was a 30% lift in conversions, they aren't saying they went from a 10% conversion rate to a 40% conversion rate. They went from a 10% conversion rate to a 13% conversion rate!
    Please don't create FALSE expectations.

  3. Are the raw numbers of conversions published?
    I am flexible on this one. Some believe such data could give competitors an advantage. My position: if you are unable to share the raw conversions, then don't publish the study, and don't share studies whose lift percentages look unbelievable. The burden of proof, in my opinion, is on the person or organization publishing the study. When you receive such information from other organizations, you can now recognize what is MISSING.

  4. Is the conversion metric listed? A good study makes the primary metric abundantly clear. This is a real pet peeve of mine.
    I've seen plenty of studies that just say "20% conversion lift!"
    Exactly WHAT metric was lifted? Was it a 20% lift in sign-ups? A 30% lift in subscribers? A 10% lift in clicks? Views? Impressions?

  5. Is the confidence rate published? I like to see a minimum of a 95% confidence rate. If I see one below 90%, that inspires me to move on. To be frank, that is probably why confidence rates are not published. (If the raw numbers are available, you can compute the confidence yourself; see the sketch after this list.)
            
  6. Is there a test procedure? What is it?
    There must be a reason for testing.
    For every test run, there are an infinite number of other tests that could have been run instead. It is important to test the RIGHT things for the RIGHT reason, and that requires more than a guess-and-check approach. If the study does not share how they came up with the ideas for the test or how they implemented it, then YOU REALLY CAN'T LEARN MUCH.
    Context is everything. Without context, the study is just idea fodder showing you things you could possibly test.
    Here are a few things to look for or include in your procedure:
    * Why that metric was selected (conversion, website traffic, PPC)
    * The traffic segment being tested (new visitors, past clients)
    * Why those elements were selected (examples: call to action, newsletter sign-up, request an evaluation, or even the color of the CTA button)
    * The test hypothesis

  7. Is the conclusion justified by the data, or is it just hyperbole?
    Way too many studies jump to major conclusions unjustifiably. Such JUMPS are what I affectionately refer to as SURROGATE ENDPOINTS.
    The only time one can attribute a change to a single element is if that element is isolated. If more than one thing changes, for the love of Pete, don't attribute the lift to the element you THINK (or hope) caused the change.
    If you are going to test an offer, don't test the design. If you are going to test design, don't test an offer.

     

  8. What is the test timeline?
    Tests require the right balance of time for their results to be taken seriously. If a test is called too soon, it may not account for the natural variance between days, or it may report on only a portion of the buying cycle. That would invalidate the results.
    For example, if the test was to determine the number of validation points accessed before conversion, sufficient time must be allowed for at least five validation incidences.
    If the test runs too long, seasonal variances could impact results. To find a reliable middle ground, here are my guidelines (encoded in the duration sketch after this list):

    * Don't run a single test for less than 7 days or more than 6 weeks.

    * All tests must complete whole weeks. If the test starts on a Wednesday, it needs to end on a Wednesday.

    The timeline is extremely important, and so is the time of year.
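To make point 2 concrete, here is a minimal sketch of the arithmetic (Python, with made-up numbers): the lift a study quotes is relative, not an absolute jump in the conversion rate.

```python
# Relative lift vs. absolute conversion rate (illustrative numbers only).
baseline_rate = 0.10   # 10% conversion rate before the change
relative_lift = 0.30   # the "30% lift!" the study is quoting

new_rate = baseline_rate * (1 + relative_lift)
print(f"New conversion rate: {new_rate:.1%}")  # 13.0%, not 40%
print(f"Absolute change: {(new_rate - baseline_rate) * 100:.1f} percentage points")  # 3.0
```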
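For points 1 and 5, here is a minimal sketch, using only the Python standard library, of how you can check a study's confidence yourself when the visitors and raw conversions are published. It uses a standard two-proportion z-test; all of the counts below are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def ab_test_confidence(visitors_a, conv_a, visitors_b, conv_b):
    """Two-sided confidence that variants A and B truly differ,
    using a standard two-proportion z-test."""
    p_a, p_b = conv_a / visitors_a, conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    return 2 * NormalDist().cdf(abs(z)) - 1   # e.g. 0.95 means 95% confidence

# Hypothetical study: 2,000 visitors per variant, 10% vs. 13% conversion.
confidence = ab_test_confidence(2000, 200, 2000, 260)
print(f"Confidence: {confidence:.1%}")   # ~99.7%; below 90% and I move on
```

Notice the check is impossible without the visitor counts, which is exactly why point 1 matters.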
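And for point 8, a small sketch that encodes the duration guidelines above: at least 7 days, at most 6 weeks, and whole weeks only, so every weekday is sampled equally. The dates are hypothetical.

```python
from datetime import date

def valid_test_window(start: date, end: date) -> bool:
    """Check a test window against the timeline guidelines above."""
    days = (end - start).days
    if not 7 <= days <= 42:   # less than a week, or longer than 6 weeks
        return False
    return days % 7 == 0      # whole weeks: start Wednesday, end Wednesday

print(valid_test_window(date(2016, 6, 1), date(2016, 6, 15)))  # True: two whole weeks
print(valid_test_window(date(2016, 6, 1), date(2016, 6, 10)))  # False: 9 days
```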

To sum up the spot check, ask these questions before sharing a study or publishing one of your own on your website:

 

1. Did I/they publish total visitors?

2. Did I/they share the lift percentage correctly?

3. Did I/they share the raw conversions? (Does the lack of raw conversions hurt my case study?)

4. Did I/they identify the primary conversion metric?

5. Did I/they publish the confidence rate? Is it >90%?

6. Did I/they share the test procedure?

7. Did I/they only use data to justify the conclusion?

8. Did I/they share the test timeline and date?

A case study that has more no’s than yeses is a dud.
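If you want to make the spot check mechanical, here is a trivial sketch that tallies the eight yes/no answers; the answers filled in are hypothetical.

```python
# Hypothetical spot-check answers for a study under review (True = yes).
spot_check = {
    "total visitors published": True,
    "lift percentage shared correctly": False,
    "raw conversions shared": False,
    "primary conversion metric identified": True,
    "confidence rate published and above 90%": False,
    "test procedure shared": False,
    "conclusion justified only by the data": False,
    "test timeline and date shared": True,
}

yeses = sum(spot_check.values())
nos = len(spot_check) - yeses
print("Dud" if nos > yeses else "Worth a closer look")   # "Dud": 5 no's to 3 yeses
```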

 

The Problem: Due to our love for case studies, we…

  • consume them
  • share them
  • try to implement the variants on our own sites without regard to their validity

This needs to stop.
The optimization industry will look like digital snake oil salesmen when business folks like us become aware of the behavior and its consequences for our businesses and our audiences. When Success Theater, Vanity Metrics, and Surrogate Endpoints are propagated, they make everyone associated with them look less professional.

These principles will be modeled when I publish my case study on the accuracy of the CIS Score home valuation method. In that study, the 4% accuracy claim for "CIS SCORE value vs. sold price" will be presented against the eight requirements described above, allowing the reader's conclusion to resonate with the data.

The outcome I wish for this blog effort is to challenge real estate professionals to use real data when they proclaim 'Sell faster and for top dollar!' or "I'm #1!" The result will be professionals who discover dimensions of their business of which they were unaware AND gain unshakable confidence in their systems, resources, and processes.

The secondary outcome is that every real estate professional will challenge those who peddle conversion/SEO snake oil to prove their numbers.

 

Best of success,
Annette Lawrence, Broker/Associate
Remax Realtec Group
Palm Harbor, FL

727.420.4041

 

 Author of:
GET ON THE RIGHT TRACK - BREAKING THE FAILURE SYSTEM

LULU.COM

Comments (3)

Raymond E. Camp
Ontario, NY

Good morning Annette,

Just like Common Core math, no one wants to take the time or do the math to understand what they are actually looking at.

Make yourself an astonishing day.

Jul 04, 2016 11:16 PM
Kat Palmiotti
eXp Commercial, Referral Division - Kalispell, MT
Helping your Montana dreams take root

These questions are great for ANY use of claims from data, not just conversion rates. Almost every time I hear an advertisement or "news" story that says x% of something did something, I say, "but what does that MEAN?" The unemployment rate is 30%, for example. Well, how did they measure unemployment, and who did they include, and when did they take this measurement, and how does that measure against the last time they took the test and on and on. I tend to not believe any claim that doesn't have fully transparent data to back it up.

Jul 05, 2016 02:26 AM
Kat Palmiotti
eXp Commercial, Referral Division - Kalispell, MT
Helping your Montana dreams take root

By the way, sitting outside with a cold margarita sounds like a wonderful place to be thinking!

Jul 05, 2016 02:27 AM