SE Ranking reported errors and fixes

3 years ago
I use SE Ranking as a reporting tool and generally find it useful. Running an audit against a 4.3 site using plugins mainly from Seven Spikes, it reports 7489 errors and 8729 semi-important fixes. I'll analyse these in detail later to ascertain where they emanate from, but here are some of the issues being reported:
1. Despite setting URL Rewrite rules in IIS that should redirect all traffic to the https://www version of the site, http is not redirecting (see the rule sketch after this list). Before you ask, I do have SSL ticked in the store configuration.
2. I am aware that false errors can occur if certain paths are spidered, and I would therefore expect to add those as Disallow entries in robots.txt. I can see a robots file is generated but cannot find where to edit it. I want to add Disallow: /login to stop that path being crawled with a returnUrl query string appended, which is responsible for many of the URLs allegedly being too long.
3. Paging reports a duplicate title, as you would expect, because it is, after all, the same category page!
4. Titles and descriptions are often reported as duplicates or, in the case of titles, too long. This is invariably because a query string is being appended to the SEO-friendly page name yet returns the same or largely similar results. I therefore want to prevent all query strings from being indexed, because they won't add to the quality of search engine results and could prove detrimental if left as they are.
5. There are several issues relating to mobile improvements, including caching, render blocking, JS parsing time and so on, but that may well be down to the recently added plugins that are not yet configured.
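
For item 1, the sort of rule I have in mind sits inside <system.webServer> in web.config and looks roughly like the sketch below; example.com is a placeholder for the real domain, and if the site sits behind a load balancer or proxy that terminates SSL, the {HTTPS} server variable may never read "on", which is one reason a rule like this can appear not to fire:

    <rewrite>
      <rules>
        <!-- Redirect anything that is plain http or missing the www prefix -->
        <!-- example.com below is a placeholder for the real host -->
        <rule name="Redirect to https www" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAny">
            <add input="{HTTPS}" pattern="off" />
            <add input="{HTTP_HOST}" pattern="^(?!www\.)" />
          </conditions>
          <action type="Redirect" url="https://www.example.com/{R:1}" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>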
Has anyone else experienced any similar issues, or run similar tests against your site?
3 years ago
I read you can add a robots.additions.txt file into the root of the web project (not wwwroot, but the root at the same folder level as web.config).

A quick search of the source code suggests Nop will read this and append it...

From CommonModelFactory.PrepareRobotsTextFile():

                //load and add robots.txt additions to the end of file.
                var robotsAdditionsFile = _fileProvider.Combine(_fileProvider.MapPath("~/"), "robots.additions.txt");
                if (_fileProvider.FileExists(robotsAdditionsFile))
                {
                    var robotsFileContent = _fileProvider.ReadAllText(robotsAdditionsFile, Encoding.UTF8);
                    sb.Append(robotsFileContent);
                }

...
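
Assuming that's the mechanism, a minimal robots.additions.txt dropped next to web.config might contain something like the lines below (purely illustrative, based on the /login case above); since the file is appended to the end of the generated output, the directives should land under the existing User-agent: * group, so plain Disallow lines ought to be enough:

    # illustrative additions - adjust the paths to suit
    Disallow: /login
    Disallow: /*?returnUrl=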
3 years ago
Thanks Jon. I did create a robots.additions.txt file but placed it in the wwwroot folder. I've now moved it and confirmed that it added /login? to the robots.txt output. I just need to work out what to do with the filters, which alter the likes of /mens-casual-jackets to /mens-casual-jackets#/attrFilters=6m!#-!5720&prFilter=To-250. Whilst this reduces the number of records shown in the category, it appears to be treated as a different page, resulting in errors being flagged for duplicate page names and content.
3 years ago
I had similar errors where Googlebot tries to index pages that have "returnUrl" in the URL. It just messes with the Google Search Console coverage report, since returnUrl pages usually have duplicate content.
It's better to add a robots.additions.txt file with the content below.

User-agent: *
Disallow: */login?returnUrl=*
Disallow: */register?returnUrl=*
Disallow: */login?returnurl=*
Disallow: */register?returnurl=*

My Search Console coverage results jumped from 26000 to 42000 in a month after I updated robots.txt with the additions file.
2 years ago
Thanks! We've added a new disallow rule to the robots.txt file: Disallow: /*?*returnUrl=. Details here.
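For anyone following along, that single wildcard pattern should cover both the /login?returnUrl= and /register?returnUrl= cases from the earlier post, ending up in the generated robots.txt roughly as below (other stock directives omitted). Note that robots.txt path matching is generally case-sensitive, so a lowercase returnurl variant would still need its own line if it ever occurs:

    User-agent: *
    Disallow: /*?*returnUrl=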
This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.