
Parse several million domains

Started by jobtardis, Dec 17, 2022, 10:37 AM


jobtardis (Topic starter)

When parsing a large number of websites to gather information from their main pages, the software often turns out to be slow and inefficient. Is there a solution to this problem? Perhaps a more efficient parser, or software capable of distributed parsing, could be the answer.

 

LaurenShawn

I'm not sure what could cause a slowdown. While I usually use Python, this task can be done using any language. To achieve very high speeds, it's advisable to use multiple threads or machines.

Let's do some simple math:
In terms of quantity -
Assuming an average page is fetched in 1 second (some faster, some slower, but 1 second on average), a single thread retrieves 3,600 domains per hour and around 86,400 domains per day, which means you could theoretically gather several million domains within roughly 65 days. If you need to be faster, run multiple threads.
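
A quick back-of-the-envelope check (a sketch only; the 5.6 million domain count is an assumed example picked to match the 65-day estimate above):

```python
# Crawl-time estimate for a single-threaded fetcher.
# The domain count is an assumed example, not a measured figure.
domains = 5_600_000
seconds_per_page = 1.0            # average fetch time per page
threads = 1

pages_per_day = threads * 86_400 / seconds_per_page
days = domains / pages_per_day
print(f"{pages_per_day:,.0f} pages/day -> about {days:.0f} days")
```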

In terms of volume -
The average weight of a web page is now about 2 MB. If you want to collect one page per second, you need to process 2 megabytes per second, which requires a connection of at least 16 megabits per second. If you run 75 threads and collect 75 pages per second, you need to be ready to pump about 150 megabytes per second, which requires a connection of roughly 1.2 gigabits per second.

You can only get a stable gigabit connection in a data center; it's rarely available on a residential plan. So if there are time requirements, it's better to run the parsing from several locations with a good uplink. As for storage, several million pages at about 2 MB each come to roughly 10 terabytes if you keep the full pages with pictures; if you store only the extracted text, the total is about 50 times smaller, on the order of a few hundred gigabytes.
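
Putting the numbers together (again a sketch; the 5.6 million domain count is an assumed example):

```python
# Bandwidth and storage estimate from the figures above.
# The domain count is an assumed example.
domains = 5_600_000
page_mb = 2.0                     # average page size, pictures included
threads = 75

throughput_mb_s = threads * page_mb        # 150 MB/s
link_gbit_s = throughput_mb_s * 8 / 1000   # ~1.2 Gbit/s
full_tb = domains * page_mb / 1_000_000    # ~11 TB with pictures
text_tb = full_tb / 50                     # a few hundred GB of pure text
print(f"{throughput_mb_s:.0f} MB/s, {link_gbit_s:.1f} Gbit/s, "
      f"{full_tb:.1f} TB full pages, {text_tb:.2f} TB text only")
```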

Overall, the approach depends on the specific requirements and resources: what information you need to extract from each page, whether you need to guarantee that every page is crawled, how quickly the crawl has to finish, and what budget you have for expenses.

MARK PETERSON

You can eliminate irrelevant and useless sites from the list by carefully searching for and excluding all kinds of "garbage" stub sites that occupy domain names previously owned by businesses. This step ensures that the list contains only active and relevant websites.

As for growth rates, it would be more informative to look at statistics from previous years, since a small increase can be attributed to economic factors, especially with the world economy currently stagnating. That broader perspective gives more insight into the long-term trends and patterns in the data.

anum982

Web scraping can be a resource-intensive task, often hampered by inefficient software, slow internet connections, and websites' increasingly sophisticated anti-scraping measures. Here's a comprehensive approach to these issues, with an emphasis on techniques and tools that improve parsing efficiency:

1. Optimize Your Scraper
  - Use Asynchronous Requests: Instead of sending requests sequentially, make non-blocking HTTP requests with `asyncio` and an async HTTP client in Python, or with Axios under Node.js. This lets your scraper run many requests concurrently, drastically reducing the time it takes to gather data (a short sketch follows this section).
 
  - Multi-threading or Multi-processing: If your tools don't support asynchronous operations, use multi-threading for I/O-bound fetching, or multiprocessing to spread CPU-heavy parsing across multiple cores.
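
A minimal sketch of the asynchronous approach, assuming `aiohttp` as the HTTP client (the domain list and concurrency limit are placeholders):

```python
# A sketch of concurrent fetching with asyncio; aiohttp is assumed as the client.
# The domain list and concurrency limit are placeholders.
import asyncio
import aiohttp

DOMAINS = ["example.com", "example.org"]   # replace with your own list
CONCURRENCY = 75                           # simultaneous requests

async def fetch(session, sem, domain):
    url = f"http://{domain}/"
    async with sem:                        # cap in-flight requests
        try:
            async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
                return domain, await resp.text()
        except Exception:                  # DNS failures, timeouts, bad encodings
            return domain, None

async def main():
    sem = asyncio.Semaphore(CONCURRENCY)
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*(fetch(session, sem, d) for d in DOMAINS))
    ok = sum(1 for _, body in results if body is not None)
    print(f"fetched {ok}/{len(results)} main pages")

if __name__ == "__main__":
    asyncio.run(main())
```

The semaphore keeps the number of in-flight requests near the thread counts discussed earlier in the thread, so the same bandwidth math applies.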

2. Implement Distributed Systems
  - Use Distributed Scraping Frameworks: Tools like Scrapy Cluster (built around Scrapy with Redis and Kafka) or the open-source crawler Apache Nutch can help you scale your scraping across multiple instances. By spreading the work over several servers (or containers), you can dramatically improve performance and data collection speed; a minimal work-queue sketch follows this section.

  - Cloud Resources: Leverage cloud computing services like AWS, Google Cloud, or DigitalOcean. Deploy your parsers on virtual machines or Kubernetes clusters to scale according to your workload. These platforms also provide load-balancing options to allocate resources efficiently.
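
The simplest way to coordinate several workers is a shared queue. A minimal sketch using Redis as the coordinator (host, port, and queue name are illustrative; assumes the `redis` Python package):

```python
# A sketch of a shared work queue so several machines can split one domain list.
# Redis is assumed as the coordinator; host, port, and queue name are illustrative.
import socket
import redis

r = redis.Redis(host="localhost", port=6379)
QUEUE = "domains:todo"

def seed(domains):
    """Run once, from any machine, to load the domain list into the queue."""
    for d in domains:
        r.rpush(QUEUE, d)

def worker():
    """Run on every machine or container; each pops its own share of the work."""
    me = socket.gethostname()
    while True:
        item = r.blpop(QUEUE, timeout=5)   # blocking pop; returns None after 5 s idle
        if item is None:                   # queue drained, stop the worker
            break
        _, domain = item
        print(f"{me} crawling {domain.decode()}")
        # fetch and parse the main page here

if __name__ == "__main__":
    seed(["example.com", "example.org"])
    worker()
```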

3. Data Collection Strategies
  - Focus on APIs: If the websites you are targeting provide APIs, use them instead of scraping the HTML content. API requests are usually more stable, faster, and less likely to be blocked by the target site.

  - Throttling and Smart Scheduling: Implement intelligent scheduling to crawl websites at varying intervals. This reduces the load on the servers you're scraping, minimizing the chances of getting banned. Tools like Scrapy's AutoThrottle can help automate this process.
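
For Scrapy specifically, AutoThrottle is enabled in `settings.py`; the values below are illustrative starting points to tune, not recommendations:

```python
# Illustrative settings.py fragment for Scrapy's AutoThrottle extension.
CONCURRENT_REQUESTS = 64                 # overall parallelism cap
CONCURRENT_REQUESTS_PER_DOMAIN = 2       # stay gentle with any single site
DOWNLOAD_DELAY = 0.25                    # base delay between requests (seconds)

AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 1.0           # initial delay while latency is unknown
AUTOTHROTTLE_MAX_DELAY = 30.0            # back off this far for slow or overloaded sites
AUTOTHROTTLE_TARGET_CONCURRENCY = 4.0    # average parallel requests per remote server
```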

4. Handle Anti-Scraping Techniques
  - Use Proxies: Rotate through a pool of residential or datacenter proxies to mask your IP address and avoid detection. Services like Bright Data (formerly Luminati) and ScraperAPI provide reliable proxy solutions.
 
  - User-Agent Rotation: Change the user-agent string in your requests to mimic different browsers. This can help bypass simple anti-scraping mechanisms that block requests based on user-agent patterns.
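
A rough sketch of per-request proxy and User-Agent rotation with the `requests` library (the proxy addresses and User-Agent strings are placeholders):

```python
# A sketch of per-request proxy and User-Agent rotation with the requests library.
# The proxy addresses and User-Agent strings below are placeholders.
import random
import requests

PROXIES = ["http://proxy1.example:8080", "http://proxy2.example:8080"]
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15",
]

def get(url):
    proxy = random.choice(PROXIES)                        # pick a proxy per request
    headers = {"User-Agent": random.choice(USER_AGENTS)}  # and a browser identity
    return requests.get(url, headers=headers,
                        proxies={"http": proxy, "https": proxy},
                        timeout=10)

if __name__ == "__main__":
    resp = get("http://example.com/")
    print(resp.status_code, len(resp.text))
```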

5. Optimize Data Processing
  - Data Storage Solutions: Use databases that can handle fast writes and reads, like MongoDB or PostgreSQL, as the destination for your scraped data. Use batch inserts to reduce the overhead of individual insert operations (a combined sketch follows this section).
 
  - Parsing Libraries: Utilize powerful parsing libraries like Beautiful Soup or lxml (for Python). These libraries can efficiently extract data from HTML and XML documents.
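
A combined sketch, assuming `pymongo` and Beautiful Soup with the lxml parser (database and collection names are illustrative):

```python
# A sketch: extract a couple of fields with Beautiful Soup (lxml parser)
# and batch-insert them into MongoDB; database and collection names are illustrative.
from bs4 import BeautifulSoup
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["crawl"]["pages"]

def extract(domain, html):
    """Pull the title and meta description from a main page."""
    soup = BeautifulSoup(html, "lxml")
    title = soup.title.get_text(strip=True) if soup.title else None
    meta = soup.find("meta", attrs={"name": "description"})
    return {"domain": domain,
            "title": title,
            "description": meta.get("content") if meta else None}

def store(batch):
    """One round-trip per batch instead of one insert per document."""
    if batch:
        collection.insert_many(batch)

if __name__ == "__main__":
    docs = [extract("example.com", "<html><title>Example</title></html>")]
    store(docs)
```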

6. Monitoring and Maintenance
  - Set Up Monitoring Tools: Use logging and monitoring solutions such as Grafana or Prometheus to track your scraper's performance. This helps you identify bottlenecks and errors in real time (a minimal metrics sketch follows this section).

  - Regularly Update Your Parsers: Websites change frequently, so it's crucial to maintain and update your scraping logic. Develop a testing suite to ensure your parsers adapt to these changes seamlessly.
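
A minimal sketch of exposing scraper metrics that Prometheus can scrape and Grafana can chart, assuming the `prometheus_client` package (metric names and the port are illustrative):

```python
# A sketch of exposing scraper metrics for Prometheus/Grafana.
# Assumes the prometheus_client package; metric names and the port are illustrative.
import time
import random
from prometheus_client import Counter, Histogram, start_http_server

PAGES = Counter("scraper_pages_total", "Pages fetched", ["status"])
LATENCY = Histogram("scraper_fetch_seconds", "Fetch latency in seconds")

def fetch_one(domain):
    with LATENCY.time():                  # record how long the fetch took
        time.sleep(random.random() / 10)  # stand-in for a real HTTP request
    PAGES.labels(status="ok").inc()       # count successful pages

if __name__ == "__main__":
    start_http_server(9100)               # metrics served at http://localhost:9100/
    for d in ["example.com", "example.org"]:
        fetch_one(d)
```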

Improving the efficiency of scraping operations is a multifaceted challenge that requires a strategic blend of advanced technologies and best practices. By optimizing your scraper for speed and robustness and leveraging distributed systems, you can tackle high-volume data extraction tasks more effectively.

