How will the server respond to more than 50 SQL queries?

Started by PlotHost, Mar 16, 2023, 07:16 AM


PlotHost (Topic starter)

This is the situation: there is a game portal where the game runs on Node.js. The node then sends data to a PHP server, which calculates the result and writes it to the database.

Here's how it works: users play the game, and Node.js sends an array of users along with their game points to the PHP script. The PHP script determines the winners, and then several queries need to be made against the database:

1) Retrieve the game data and verify its accuracy.
2) Log that a specific game has been played.
3) Make four requests for each player:
   - Retrieve player data.
   - Update the rating and other information.
   - Record that the player has played the game (e.g., at 20:21, the user won or lost).
   - Update the data.

The game can have from 2 to 4 players, and there are a total of 10 games. Around 1000 people can play each game.

Based on calculations, processing the game results for 4 players requires around 30 queries to the database.
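The per-game sequence above can be sketched roughly as follows. This is only an illustration: the table and column names (`games`, `players`, `history`, `rating`) are hypothetical, SQLite stands in for whatever DBMS is actually used, and the rating adjustment is made up. The point is that all writes for one game can be grouped into a single transaction:

```python
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE games   (id INTEGER PRIMARY KEY, played_at TEXT);
CREATE TABLE players (id INTEGER PRIMARY KEY, rating INTEGER);
CREATE TABLE history (player_id INTEGER, game_id INTEGER, won INTEGER, at TEXT);
""")
conn.executemany("INSERT INTO players VALUES (?, ?)",
                 [(1, 1000), (2, 1000), (3, 1000), (4, 1000)])

def record_game(game_id, scores):
    """scores: {player_id: points}. All writes for one game in one transaction."""
    winner = max(scores, key=scores.get)
    now = datetime.now().isoformat(timespec="seconds")
    with conn:  # BEGIN ... COMMIT, or ROLLBACK on exception
        conn.execute("INSERT INTO games VALUES (?, ?)", (game_id, now))
        for pid in scores:
            won = 1 if pid == winner else 0
            # hypothetical rating rule: +10 for a win, -5 for a loss
            conn.execute("UPDATE players SET rating = rating + ? WHERE id = ?",
                         (10 if won else -5, pid))
            conn.execute("INSERT INTO history VALUES (?, ?, ?, ?)",
                         (pid, game_id, won, now))

record_game(1, {1: 42, 2: 17, 3: 30, 4: 8})
print(conn.execute("SELECT rating FROM players WHERE id = 1").fetchone()[0])  # 1010
```

Wrapping the whole game in one transaction also means a crash mid-way can't leave half-updated ratings.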

Now, let's consider the impact of increased traffic on the server. For instance, if 100 games finish at the same time, that means around 2000 queries hitting the database at once. Is this a problem? And can a web cluster help if the server becomes overloaded?


The question is quite abstract: neither the specific database nor the server is mentioned, and exactly what the PHP script does is also unclear, which leaves room for uncertainty.

In essence, the impact shouldn't be significant. The design of the database, particularly the structure of the data tables, plays a crucial role in its performance. If you examine any modern CMS, you'll find numerous SQL queries used to generate a single page. Despite this, websites manage to handle thousands of requests thanks to caching and other optimizations.
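The caching idea can be shown with a minimal sketch. This is not any particular library's API, just the general pattern: keep a query result in memory for a short TTL so repeated page loads don't hit the database at all (the `"top10"` key and `expensive` fetch function are made-up placeholders):

```python
import time

_cache = {}

def cached_query(key, fetch, ttl=30):
    """Return a cached result if it is younger than ttl seconds;
    otherwise call fetch() (the real DB query) and cache the result."""
    hit = _cache.get(key)
    if hit and time.monotonic() - hit[0] < ttl:
        return hit[1]
    value = fetch()
    _cache[key] = (time.monotonic(), value)
    return value

calls = 0
def expensive():
    """Stand-in for a slow database query; counts how often it really runs."""
    global calls
    calls += 1
    return "leaderboard"

cached_query("top10", expensive)
cached_query("top10", expensive)
print(calls)  # the underlying query ran only once
```

In a real PHP setup the same role is usually played by something like Memcached or Redis rather than an in-process dict.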

Optimizing queries can also help improve performance, such as retrieving player data in a single request whenever possible.
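For example, instead of one `SELECT` per player, the four lookups can collapse into a single query with an `IN` clause. A sketch (SQLite for illustration; the `players` schema is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE players (id INTEGER PRIMARY KEY, name TEXT, rating INTEGER)")
conn.executemany("INSERT INTO players VALUES (?, ?, ?)",
                 [(1, "a", 1500), (2, "b", 1400), (3, "c", 1300), (4, "d", 1200)])

player_ids = [1, 2, 3, 4]

# One round trip instead of four: SELECT ... WHERE id IN (?, ?, ?, ?)
placeholders = ",".join("?" * len(player_ids))
rows = conn.execute(
    f"SELECT id, name, rating FROM players WHERE id IN ({placeholders})",
    player_ids,
).fetchall()
print(rows)
```

The same trick applies to the per-player updates (`executemany`, or a single multi-row `INSERT` for the history log), which alone can cut the ~30 queries per game down considerably.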

To address uneven load, one approach is to have the node write data to a file while PHP acts as a daemon or processes the data at regular intervals. This method allows for better distribution and handling of the workload.
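A minimal sketch of that queue idea, in Python for brevity (the node side would append, the periodic worker would drain; file names are made up, and a production setup would need file locking or a real message queue to be safe with concurrent writers):

```python
import json
import os
import tempfile

QUEUE = os.path.join(tempfile.mkdtemp(), "results.jsonl")

def enqueue(result):
    """What the node process would do: append one game result per line."""
    with open(QUEUE, "a") as f:
        f.write(json.dumps(result) + "\n")

def drain():
    """What the periodic worker would do: read all pending results,
    truncate the file, and return the batch for processing."""
    if not os.path.exists(QUEUE):
        return []
    with open(QUEUE, "r+") as f:
        batch = [json.loads(line) for line in f if line.strip()]
        f.seek(0)
        f.truncate()
    return batch

enqueue({"game": 3, "players": [1, 2], "winner": 1})
enqueue({"game": 7, "players": [3, 4], "winner": 4})
batch = drain()
print(len(batch))  # 2 results, processed in one pass instead of two bursts
```

The worker can then write a whole batch to the database in a handful of queries, smoothing out the spikes.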

Uncertainty surrounding the specific details of the setup and functionality can indeed make it challenging to provide precise recommendations. However, the principles of efficient database design, query optimization, and load balancing remain important factors to consider when dealing with increasing traffic.

It's worth noting that as traffic grows, it becomes crucial to prioritize scalability and performance optimization. This may involve exploring alternative technologies or architectures, such as using distributed databases or implementing caching mechanisms.

Ultimately, finding the most suitable solution depends on various factors, including the complexity of the application, available resources, and the expected growth of traffic. Regular monitoring and performance testing can help identify potential bottlenecks and optimize the system accordingly.


During queries, temporary tables can be created implicitly, which is an important factor to consider.
However, 2000 queries, each taking a few milliseconds, will likely add some delay, but nothing critical.
On a multi-core server it might take a second or two to work through them; the impact should not be significant.
To get a better understanding of the behavior, try using Apache Bench ("ab") to simulate a queue of 100 concurrent requests against the page in question (e.g. `ab -n 1000 -c 100` followed by the page's URL) and observe how it holds up. This simple test gives concrete numbers rather than speculative assumptions.


There are two broad concurrency models in DBMSs:
- Locking: all reads return the current committed values, and the execution logic is straightforward. While a record is being modified, other operations on it are blocked until the change completes.
- Transactional (multiversion): records can be read and modified concurrently. When records are modified, temporary versions are created that await confirmation (commit) or cancellation (rollback). Until the changes are committed, other sessions see the data as it existed before the modification.

The choice between a locking and a transactional model depends on the specific requirements and nature of the application. Locking systems are simple and easy to reason about, ensuring that reads return consistent data once changes complete. Multiversion models offer more concurrency by letting other operations see the previous state until the changes are committed.
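The multiversion behavior can be observed directly with SQLite in WAL mode (chosen here only because it ships with Python; other MVCC databases such as PostgreSQL behave similarly). A reader on a second connection still sees the old value while a writer's transaction is open:

```python
import os
import sqlite3
import tempfile

# WAL mode needs a file-backed database, not :memory:
path = os.path.join(tempfile.mkdtemp(), "demo.db")

w = sqlite3.connect(path, isolation_level=None)  # autocommit; explicit BEGIN below
w.execute("PRAGMA journal_mode=WAL")
w.execute("CREATE TABLE players (id INTEGER PRIMARY KEY, rating INTEGER)")
w.execute("INSERT INTO players VALUES (1, 1000)")

r = sqlite3.connect(path, isolation_level=None)

w.execute("BEGIN")
w.execute("UPDATE players SET rating = 1100 WHERE id = 1")

# The reader is not blocked and still sees the pre-transaction value
before = r.execute("SELECT rating FROM players WHERE id = 1").fetchone()[0]

w.execute("COMMIT")

# After the commit, a fresh read sees the new value
after = r.execute("SELECT rating FROM players WHERE id = 1").fetchone()[0]
print(before, after)  # 1000 1100
```

With SQLite's default rollback-journal mode (a locking-style setup), the same reader would instead be blocked, or fail with "database is locked", until the writer finishes.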

The selection of the appropriate DBMS architecture also depends on factors such as concurrency requirements, data consistency, and performance considerations. It's essential to carefully analyze the application's needs and consider the potential impact of each approach on system efficiency and user experience.