3. What options do we have when it comes to controlling which pages are indexed/crawled?
The third thing we'll talk about is the options we have for controlling whether a page is indexed or crawled. I won't go into the details of each of these today, but I have a blog post on the subject that we'll link to below.
The main, most common options we have for controlling this kind of thing are: using a noindex directive to prevent Google from indexing a page; using canonical tags to select the canonical version of a page; using a disallow rule in robots.txt to prevent Google from crawling a particular part of the site; or using the nofollow meta directive. These are some of the most common options. Again, we're not going to go into the intricacies of each. Each of these has its own set of advantages and disadvantages, so you can research them yourself.
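As a quick illustration of what those options look like in practice, here is a hedged sketch. The URLs and the `?color=` parameter are placeholders, not from the original post, and where each tag goes depends on your CMS and templates:

```html
<!-- noindex: the page can be crawled, but is kept out of Google's index -->
<meta name="robots" content="noindex">

<!-- canonical tag: points duplicate or parameterized URLs at the preferred version -->
<link rel="canonical" href="https://www.example.com/category/shoes/">

<!-- nofollow meta directive: asks crawlers not to follow any links on this page -->
<meta name="robots" content="nofollow">
```

And a robots.txt disallow rule, which blocks crawling (but not necessarily indexing) of a URL pattern:

```
# robots.txt at the site root; blocks crawling of faceted URLs with this parameter
User-agent: *
Disallow: /*?color=
```

Note the common trap: a robots.txt disallow prevents crawling, so Google may never see a noindex placed on those same pages. The two options generally should not be combined on the same URLs.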
So, okay, we know all this. What would be an ideal solution? Before I jump into it, I don't want you guys to run to your bosses and say, "This is what we need to do."
Please do your research beforehand, because the right approach will vary a lot based on your site. Depending on the dev resources you have, you'll need to be efficient with this. Also, do some keyword research, primarily around the long tail. There are many cases where you can, and will want to, open up three or four facets for indexing.
So again, a big caveat: this is not the solution. It's something we have sometimes, when appropriate, recommended to clients. So let's jump into what an ideal solution, or rather a potential solution, looks like.
What might a solution look like?
Category, subcategory, and subsubcategory pages are open for indexing and crawling.