How do search engines determine the original source of a text?

Started by Adam Greer, Aug 20, 2022, 01:32 AM


Adam Greer (topic starter)

Question: Suppose we have two websites. I publish a unique article on site A, then publish the same article on site B. Now suppose the article page on site B gets indexed before the page on site A. Will the search engine treat the article on site B as the original and the copy on site A as plagiarism?


The search engine may well do exactly that. But in your scenario it hardly matters, since the articles were not original to begin with, so I wouldn't worry too much about it. If you want search engines to index your articles faster, fix any errors on the site and publish new content more often.

Then the crawlers will visit more often, and your articles will be indexed sooner, so the chance of someone else's copy being indexed ahead of yours goes down. In any case, I doubt there will ever be such wild demand for a specific article from a specific "dead" website that someone else publishes that exact article on their own site at the very same moment you do. By probability theory, the chance of such an event tends to zero.