5 Common Technical Problems That Limit SEO Success

Technical SEO is often overlooked, but it is critical to a website’s ranking success. Googlebot follows specific rules: it can only crawl links that are properly coded, and it ignores certain elements entirely. As a result, technical oversights can hurt Google’s ability to crawl, index, and rank a site. Here are five of the most common technical errors that might be causing your website to rank lower.

Cookie-based content

Do you put cookies on your visitors’ web browsers to personalize their experience? This could be what’s keeping you from attaining the rankings you want.

Let’s say, for example, that you run an ecommerce website that uses cookies to serve content in different languages. When a visitor chooses to read in German, your site sets a cookie and the rest of that visit proceeds in German. The URL stays the same, but the content served is different.

Perhaps you are hoping that this strategy will help you rank in German organic search and bring German-speaking customers to your online store. But it won’t. Why? Because content that your visitors reach solely through cookies (rather than by clicking links) is not accessible to Googlebot. When the content changes but the URL remains the same, Google cannot crawl the alternative versions.
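To make this concrete, here is a minimal sketch of the pattern described above, written as an Express-style handler in TypeScript; the route, cookie name, and page content are hypothetical.

    import express from 'express';
    import cookieParser from 'cookie-parser';

    const app = express();
    app.use(cookieParser());

    // One URL, different content depending on a cookie.
    // Googlebot never sends the "lang" cookie, so it only ever
    // sees the default (English) version of this page.
    app.get('/products', (req, res) => {
      const lang = req.cookies.lang ?? 'en';
      if (lang === 'de') {
        res.send('<h1>Unsere Produkte</h1>'); // German-speaking visitors
      } else {
        res.send('<h1>Our products</h1>'); // everyone else, including Googlebot
      }
    });

    app.listen(3000);

Because the German version has no URL of its own, there is nothing for Google to crawl, index, or rank in German search results.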

Location-based pages

Do you have locale-adaptive pages that detect your visitors’ IP addresses and display content based on their locations? It might be time to rethink that approach. After all, IP detection is not foolproof. The IP address of a person who is actually in New York City might appear to be in Boston, for example. That visitor will then be served content intended for Boston, which is irrelevant and useless to them.

Think about it: Googlebot’s default IP address geolocates to San Jose, California. This means your website will serve Googlebot content related to that area, an invisible impediment to SEO that’s hard to detect.

So should you get rid of location-based content completely? It can be acceptable for a visitor’s first entry into your website, but not for the content that follows. Serve subsequent pages based on the links clicked, not the IP address.
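Here is a simplified sketch of the locale-adaptive pattern, again as Express-style TypeScript; lookupCity is a hypothetical stand-in for whatever geo-IP service a site might use.

    import express from 'express';

    const app = express();

    // Hypothetical geo-IP lookup; a real site would call a geolocation service here.
    function lookupCity(ip: string | undefined): string {
      return 'Boston'; // placeholder result
    }

    // One URL whose content depends on the visitor's apparent location.
    // A mis-located visitor gets the wrong city, and Googlebot crawling
    // from a California IP only ever sees the California version.
    app.get('/store-info', (req, res) => {
      const city = lookupCity(req.ip);
      res.send(`<h1>Store hours and offers for ${city}</h1>`);
    });

    app.listen(3000);

Following the advice above, a location guess like this is tolerable for the first page a visitor lands on; after that, the links they click, not their IP address, should determine what is served.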

Uncrawlable JavaScript links

In the eyes of Google, a link should contain two elements: (1) an href attribute pointing to a particular URL and (2) an anchor (<a>) tag. Google also values anchor text because it signals the relevance of the linked page.

The problem is that many ecommerce sites, perhaps yours included, use JavaScript links instead of plain HTML anchor tags, often in mouseover dropdown menus. These dropdowns might work well for human visitors, but Googlebot doesn’t see them as crawlable links. They cause indexation problems that ultimately hurt your SEO.
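For contrast, here is a small TypeScript sketch of the two patterns; the URL, label, and goToCategory handler are made up for illustration.

    // Crawlable: a plain HTML anchor with an href Googlebot can follow,
    // plus anchor text that tells Google what the target page is about.
    const crawlableLink = `<a href="/womens/dresses">Dresses</a>`;

    // Not crawlable: navigation happens only in JavaScript, so there is
    // no href for Googlebot to discover and no anchor text to evaluate.
    const jsOnlyLink =
      `<span class="nav-item" onclick="goToCategory('/womens/dresses')">Dresses</span>`;

If your dropdown menus are built the second way, the category pages behind them may never be discovered and indexed.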

Robots.txt disallow

The robots.txt file tells bots which pages they may not crawl via the disallow directive. However, disallowing a page doesn’t actually prevent it from being indexed. Worse, it leaves bots unable to crawl the page and determine its relevance, thereby preventing that page from ranking.

If you see a sudden drop in your organic search traffic after a redesign or a similar major change, be sure to check for disallow directives in the robots.txt file. They may have been left there by accident, blocking Googlebot from crawling your entire website.
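An accidental site-wide block can be as short as two lines. As a rough illustration (the /checkout/ path is just an example), compare these robots.txt excerpts:

    # Blocks every crawler from the entire site:
    User-agent: *
    Disallow: /

    # What was probably intended: block only one private section.
    User-agent: *
    Disallow: /checkout/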

Hashtag URLs

While most SEO technicians understand that hashtag (fragment) URLs can cause indexation issues, many online marketers don’t, and they are surprised to learn that this URL structure can lead to low rankings. The problem is that everything after the hashtag is handled in the browser and generally ignored by Google, so a hashtag URL often doesn’t reproduce the intended content on subsequent visits. If Google does index a hashtag URL, the content searchers land on may be different from what they were looking for.
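To see why, note that the fragment (everything after the ‘#’) never leaves the browser. A quick TypeScript check with the standard URL class makes this visible; the store URL is made up.

    // The fragment is purely client-side: it is not sent to the server
    // with the request, and Google generally drops it when indexing.
    const url = new URL('https://www.example-store.com/shoes#colour=red');

    console.log(url.pathname); // "/shoes"       <- what the server and Google see
    console.log(url.hash);     // "#colour=red"  <- handled only in the browser

So the page Google stores for that URL is effectively /shoes; whatever the #colour=red state showed the original visitor may not be what a searcher lands on.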

In our many years of conducting site audits, we’ve seen these technical problems on countless new clients’ websites. Do you suspect you have the same issues? Contact us at SEOValley today so we can check your site and help you move forward.