Just the Facts, Ma’am – Just the Facts.

Updated March 8, 2011

Doc Sheldon

2010 brought a lot of changes to the Internet world in terms of search, and 2011 is giving every indication of bringing even more. Local, social, mobile, personalized… all are taking on added importance, and those who work even on the fringes of SEO (search engine optimization) are as confused as ever about which signals carry the most weight.

Theories, of course, abound, as always. Some are reasonable and based on “testing” – and others are just brain-farts, knee-jerk efforts to jump on the bandwagon. The trick, of course, is to sort out which is which.

Typically, we do this by comparing what we know, what we think, and what is proposed, and deciding whether we believe what we see, accept it as a reasonable possibility or reject it entirely. This process is anything but scientific, however, because our emotions often enter into the equation.

Rare is the individual who can truly look dispassionately at a hypothesis that challenges everything he has previously believed in, and accept it as a viable alternative.

That is one of the reasons highly structured scientific testing yields the most reliable results, though not the only one. The key is to follow the rules. Religiously!

Ideally, you want to isolate every variable. Hey… I did say ideally! But in practice, outside of a laboratory, that’s rarely possible. In the SEO world, I won’t say it’s impossible, but for all practical purposes, it might as well be. So what’s the next best thing?

Compensate equally across the entire field for each variable you aren’t testing for. That isn’t always easy either, but if you can achieve it, you give yourself a better chance of some meaningful findings. You’ll have to take the deviations into account in that compensation, and if the deviations are large, compensating may not be entirely effective.
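One standard way to spread untested variables evenly is random assignment: split your test subjects at random, and the factors you can’t control tend to average out across groups. Here’s a minimal sketch in Python, assuming a purely hypothetical list of test pages:

```python
import random

# Hypothetical list of test pages; uncontrolled factors (page age, authority,
# content length) should average out across groups once assignment is random.
pages = [f"page-{i:03d}" for i in range(100)]

random.seed(42)        # fixed seed so the split is reproducible
random.shuffle(pages)

midpoint = len(pages) // 2
control = pages[:midpoint]      # left untouched
treatment = pages[midpoint:]    # receives the change being tested

print(len(control), len(treatment))   # 50 50
```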

As an example, say you’re testing for the optimum settings for a plastic injection molding process. You identify the three major variables as time, pressure and temperature. Initially, you’ll want to hold two of the three steady while varying the third and recording the results. You’ll then repeat that process for each of the three factors.
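That first pass might look something like this in code. The `run_molding` function and all its numbers are hypothetical stand-ins for a real run and a real measurement:

```python
def run_molding(time_s, pressure_bar, temp_c):
    """Hypothetical stand-in for one molding run; returns a quality score."""
    # In a real test this would be a physical run plus a measurement.
    return 100 - abs(temp_c - 210) * 0.5 - abs(pressure_bar - 80) * 0.2

FIXED_TIME = 30       # seconds, held steady
FIXED_PRESSURE = 80   # bar, held steady

# Vary only temperature, recording the result of each run.
for temp_c in range(180, 241, 10):
    score = run_molding(FIXED_TIME, FIXED_PRESSURE, temp_c)
    print(f"{temp_c} °C -> quality {score:.1f}")
```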

Once you have completed that, you’ll hold one of the factors steady while running the other two through a range of variations. It’s possible that you’ll see some puzzling results here, as two related changes made simultaneously may give a result that neither change would produce individually.

Finally, you may go through a run changing all three variables, looking for more strange indications caused by the combination of varied inputs. Depending upon how many criteria you’re monitoring, this could be the lengthiest process yet. You can see, I’m sure, that you might be scrambling to keep tabs on all the readings through that many iterations.
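Sweeping all three factors is just a nested loop; in Python, `itertools.product` keeps it readable. Again, the ranges and the stand-in function are illustrative only:

```python
import itertools

def run_molding(time_s, pressure_bar, temp_c):
    # Same hypothetical stand-in as above.
    return 100 - abs(temp_c - 210) * 0.5 - abs(pressure_bar - 80) * 0.2

times = [20, 30, 40]        # seconds
pressures = [60, 80, 100]   # bar
temps = [190, 210, 230]     # °C

# 3 x 3 x 3 = 27 runs, one for every combination of the three factors.
records = [((t, p, c), run_molding(t, p, c))
           for t, p, c in itertools.product(times, pressures, temps)]

# Sort to surface the best-looking combinations.
for settings, score in sorted(records, key=lambda r: -r[1])[:5]:
    print(settings, round(score, 1))
```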

Increased temperature and reduced time will have one effect on the finished part, while added pressure may reverse some of that effect. Yet with some plastics, increasing pressure without increasing time and temperature will produce the opposite result. Thus, it’s very important to carefully record all results through several runs of every possible variation, in order to see any meaningful patterns, particularly in terms of deviations.

DOE (design of experiments) can be very helpful in analyzing the effects of multiple factors on an outcome. Manipulating multiple inputs simultaneously can reveal critical interactions that would otherwise go unnoticed. DOE is the best approach to testing when you suspect that more than one input variable might affect the outcome.

You can also use DOE to confirm suspected input/output relationships and to develop suitable what-if analyses. This is probably the most common use of DOE within the SEO community, since isolating individual inputs is extremely difficult.
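To make that concrete, here’s a rough sketch of a 2-level, 3-factor full factorial in coded units, with main effects and two-factor interactions computed the classic way: mean response at the high level minus mean at the low level. The `observe` function is a made-up response with a deliberate pressure x temperature interaction baked in:

```python
import itertools

# Coded 2-level full factorial for three factors (-1 = low, +1 = high).
factors = ["time", "pressure", "temp"]
design = list(itertools.product([-1, 1], repeat=3))   # 2^3 = 8 runs

def observe(run):
    """Made-up response with a deliberate pressure x temp interaction."""
    t, p, c = run
    return 50 + 4 * t + 2 * p - 3 * c + 5 * p * c

responses = [observe(run) for run in design]

# Main effect: mean response at the high level minus mean at the low level.
for i, name in enumerate(factors):
    high = [y for run, y in zip(design, responses) if run[i] == 1]
    low = [y for run, y in zip(design, responses) if run[i] == -1]
    print(f"{name:8s} main effect: {sum(high)/len(high) - sum(low)/len(low):+.1f}")

# Two-factor interactions: the same calculation on the product column.
for i, j in itertools.combinations(range(3), 2):
    prod = [run[i] * run[j] for run in design]
    high = [y for x, y in zip(prod, responses) if x == 1]
    low = [y for x, y in zip(prod, responses) if x == -1]
    diff = sum(high) / len(high) - sum(low) / len(low)
    print(f"{factors[i]} x {factors[j]} interaction: {diff:+.1f}")
```

Run it and only the pressure x temp interaction comes back non-zero among the interactions – exactly the kind of combined effect that one-factor-at-a-time testing can miss.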

One daunting aspect of building your DOE construct can be the number of iterations required. If, for instance, you intend to test three factors at two levels, looking at 100 repetitions, you will be looking at 800 different data points {(2^3) x 100}. As you can imagine, if you’re considering 6 factors at 10 levels, and want to check 5,000 repetitions, the 5,000,000,000 data points {(10^6) x 5,000} might be a little intimidating to gather.
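The arithmetic is simply levels raised to the number of factors, times the repetitions:

```python
def data_points(levels, factors, repetitions):
    """Full-factorial data-point count: every combination, repeated."""
    return levels ** factors * repetitions

print(data_points(2, 3, 100))      # 800
print(data_points(10, 6, 5000))    # 5000000000
```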

And then, of course, would come the analysis…

Fortunately, there are resources available to aid in the process. One obvious option would be to hire a statistical analyst to do the study for you. Costly and time-consuming as that would probably be, you may prefer to try to do it yourself.

If you’re conversant in Excel, you can build a spreadsheet to manipulate the data, and even take advantage of the built-in what-if analysis functions. Or, if your testing involves no more than 2 levels and 3 factors, you can use the existing downloadable Excel template from iSixSigma. Others exist, free of charge, and can be tailored to your needs.

The bottom line, if you haven’t guessed already, is that testing isn’t really testing unless it’s properly constructed, conducted and analyzed. Simply trying to find evidence that your theory is (or may be) correct is one of the surest ways to screw up the process.

If you can’t maintain a totally dispassionate attitude toward your testing and the outcome (and really, how many of us can?), then the best approach is often to set out to disprove your theory. That, at least, helps prevent you from unconsciously reading something into your results that isn’t justified by the facts.
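One concrete way to set out to disprove yourself is to test against the null hypothesis that your change did nothing, for example with a simple permutation test. The ranking figures below are invented purely for illustration:

```python
import random

# Hypothetical ranking gains for pages that got the change vs. pages that didn't.
treated = [3, 5, 2, 4, 6, 3, 5, 4]
untreated = [1, 2, 0, 3, 1, 2, 1, 2]

observed = sum(treated) / len(treated) - sum(untreated) / len(untreated)

# Null hypothesis: the change did nothing, so the labels are interchangeable.
# Shuffle the labels and count how often chance alone matches the real gap.
pooled = treated + untreated
random.seed(1)
trials, extreme = 10000, 0
for _ in range(trials):
    random.shuffle(pooled)
    a, b = pooled[:len(treated)], pooled[len(treated):]
    if sum(a) / len(a) - sum(b) / len(b) >= observed:
        extreme += 1

print(f"observed difference: {observed:.2f}, p ~ {extreme / trials:.4f}")
```

If shuffled labels match or beat the real difference often, you haven’t disproven anything, and your theory doesn’t hold water yet.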

Since, presumably, you will be implementing SEO tactics according to your findings, you want to be sure that those findings are reliable. And if you’re an SEO, your credibility should be of paramount importance, so don’t blog about your “test results” if they won’t hold water.

