What is Technical SEO and Why is it Important?

Throughout the next two lessons, we’re going to be talking about technical issues and technical SEO. Technical SEO is the process of optimizing your website to help search engines find, understand, and index your pages. Now, for beginners, technical SEO doesn’t need to be all that technical. And for that reason, this module will be focused on the basics so you can perform regular maintenance on your site and ensure that your pages can be discovered and indexed by search engines. Let’s get started.

All right, so let’s talk about why technical SEO is important. At the core, if search engines can’t properly access, read, understand, or index your pages, then you won’t rank, or even be found, for that matter. So to avoid innocent mistakes like removing yourself from Google’s index or diluting a page’s backlinks, I want to discuss five things that should help you avoid that.

First is the noindex meta tag. By adding this piece of code to your page, you’re telling search engines not to add it to their index. And you probably don’t want to do that. This actually happens more often than you might think. For example, let’s say you hired a design agency to create or redesign a website for you. During the development phase, they may create it on a subdomain of their own site.
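For reference, the noindex meta tag itself is just a short snippet placed in a page’s <head>. Here’s a minimal example of the standard form, which applies to all crawlers:

```html
<!-- Placed in the <head>: tells all search engine crawlers not to index this page -->
<meta name="robots" content="noindex">
```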

So it actually makes sense for them to noindex the site they’re working on. But what often happens is that after you’ve approved the design, they’ll migrate it over to your domain, but they forget to remove the meta noindex tag. And as a result, your pages end up getting removed from Google’s search index, or never making it in.

Now, there are times when it actually makes sense to noindex certain pages. For example, our authors’ pages are noindexed because, from an SEO perspective, these pages provide very little value to search engines. But from a user experience standpoint, it can be argued that it makes sense for them to be there. Some people may have their favorite authors on a blog and want to read just their content. Generally speaking, for small sites you won’t need to worry about noindexing specific pages. Just keep your eye out for noindex tags on your pages, especially after a redesign.

The second point of discussion is robots.txt. Robots.txt is a file that usually lives on your root domain, and you should be able to access it at yourdomain.com/robots.txt. Now, the file itself includes a set of rules for search engine crawlers and tells them where they can and cannot go on your site. And it’s important to note that a website can have multiple robots.txt files if you’re using subdomains. For example, if you have a blog on domain.com, then you’d have a robots.txt file for just the root domain. But you might also have an e-commerce store that lives on store.domain.com, so you could have a separate robots.txt file for your online store. That means that crawlers could be given two different sets of rules, depending on the domain they’re trying to crawl.

Now, the rules are created using something called directives. And while you probably don’t need to know what all of them are or what they do, there are two that you should know about from an indexing standpoint. The first is user-agent, which defines the crawler that the rules apply to, and the value for this directive will be the name of the crawler. For example, Google’s user agent is named Googlebot. The second directive is disallow. This is a page or directory on your domain that you don’t want the user agent to crawl. For example, if you set the user-agent to Googlebot and the disallow value to a slash, you’re telling Google not to crawl any pages on your site. Not good. Now, if you were to set the user-agent to an asterisk, that means your rules apply to all crawlers. So if your robots.txt file looks something like this, then it’s telling all crawlers: please don’t crawl any pages on my site.
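To illustrate, here’s the worst-case robots.txt file described above, where every crawler is told to stay away from the entire site:

```
# The asterisk means these rules apply to all crawlers
User-agent: *
# A bare slash disallows every page on the site - you almost never want this
Disallow: /
```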

While this might sound like something you would never use, there are times when it makes sense to block certain parts of your site or to block certain crawlers. For example, if you have a WordPress website and you don’t want your admin folder to be crawled, then you can simply set the user-agent to all crawlers and set the disallow value to /wp-admin/ (there’s a sample robots.txt at the end of this lesson). Now, if you’re a beginner, I wouldn’t worry too much about your robots.txt file. But if you run into any indexing issues that need troubleshooting, your robots.txt file is one of the first places I’d check.

All right, the next thing to discuss is sitemaps. Sitemaps are usually XML files, and they list the important URLs on your website. These can be pages, images, videos, and other files, and sitemaps help search engines like Google crawl your site more intelligently. Now, creating an XML file can be complicated if you don’t know how to code, and it’s almost impossible to maintain manually. But if you’re using a CMS like WordPress, there are plugins like Yoast and Rank Math which will automatically generate sitemaps for you. To help search engines find your sitemaps, you can use the sitemap directive in your robots.txt file and also submit them in Google Search Console.

Next up are redirects. A redirect takes visitors and bots from one URL to another, and its purpose is to consolidate signals. For example, let’s say you have two pages on your website about the best golf balls: an old one at domain.com/best-golf-balls-2018 and another at domain.com/best-golf-balls. Seeing as these are highly relevant to one another, it would make sense to redirect the 2018 version to the current version. And by consolidating these pages, you’re telling search engines to pass the signals from the redirected URL to the destination URL.

The last point I want to talk about is the canonical tag. A canonical tag is a snippet of HTML code that looks like this: <link rel="canonical" href="https://domain.com/page/" />. Its purpose is to tell search engines what the preferred URL is for a page, and this helps to solve duplicate content issues. For example, let’s say your website is accessible at both http://domain.com and https://domain.com, and for whatever reason you weren’t able to use a redirect. These would be exact duplicates. But by setting a canonical URL, you’re telling search engines that there’s a preferred version of the page. As a result, they’ll pass signals such as links to the canonical URL so they’re not diluted across two different pages. Now, it’s important to note that Google may choose to ignore your canonical tag. Looking back at the previous example, if we set the canonical tag to the insecure HTTP page, Google would probably choose the secure HTTPS version instead.

Now, if you’re running a simple WordPress site, you shouldn’t have to worry about this too much. CMSs are pretty good out of the box and will handle a lot of these basic technical issues for you. So these are some of the foundational things that are good to know when it comes to indexing, which is arguably the most important part of SEO, because again, if your pages aren’t getting indexed, nothing else really matters. Now, we won’t really dig deeper into this because you’ll probably only have to worry about indexing issues if and when you run into problems. Instead, we’ll be focusing on technical SEO best practices to keep your website in good health. And that lesson will be published later on this week.
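To tie the robots.txt points together, here’s a minimal sketch of a file like the one described above. It assumes a WordPress site (hence the /wp-admin/ folder) and a sitemap at domain.com/sitemap.xml, so treat both values as placeholders for your own setup:

```
# These rules apply to all crawlers
User-agent: *
# Keep crawlers out of the WordPress admin folder
Disallow: /wp-admin/

# Point crawlers at the XML sitemap (use the full URL)
Sitemap: https://domain.com/sitemap.xml
```

If you’re on WordPress, the SEO plugins mentioned above can usually generate and edit this file for you, so you rarely need to write it by hand.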
