
 

Cron Scheduling: Non-Stop Script Execution

Started by Austin, Jun 23, 2024, 12:55 AM


Austin (Topic starter)

Greetings. I'm looking to execute a script at ten-second intervals, with the total cycle not exceeding one minute. Admittedly, I'm not overly familiar with the process.
This script's purpose is to monitor certain applications to ensure they're operational. Should it find an application active, it will terminate it; if inactive, it does nothing.

In my research, I stumbled upon a method:

* * * * * for i in {1..6}; do /bin/sh /home/123/1.sh & sleep 10; done

This approach suggests that cron will initiate six separate runs of the script in the background, each spaced ten seconds apart, within a single minute.
Suppose I introduce an additional, analogous task:

* * * * * for i in {1..6}; do /bin/sh /home/123/1.sh & sleep 10; done
* * * * * for i in {1..6}; do /bin/sh /home/123/2.sh & sleep 10; done
Would this setup function as intended without causing any disruptions?


drunken

The approach you've outlined using the cron scheduler is a valid method for executing scripts at regular intervals. However, there are a few considerations to keep in mind:

1. Timing Accuracy: Cron itself only schedules at one-minute granularity; the 10-second spacing comes entirely from the sleep calls in your loop. System load, the runtime of the script itself, and the moment cron actually starts the job all add drift, so the runs will not be exactly 10 seconds apart.

2. Resource Utilization: Executing multiple scripts simultaneously, even if they are spaced 10 seconds apart, can potentially lead to resource contention and impact the overall system performance. This is especially true if the scripts are resource-intensive or interact with the same applications.

To address these concerns and ensure the reliable execution of your monitoring tasks, I would recommend the following approach:

1. Use a Dedicated Scheduling Tool: Instead of relying on cron plus sleep loops, consider systemd timers. They support sub-minute intervals natively, provide more precise timing control, and make it easier to manage resource usage through the associated service unit.

2. Implement a Single Script: Instead of running multiple scripts in parallel, consider combining the functionality of the scripts into a single script. This will help ensure that the monitoring tasks are executed in a coordinated and efficient manner, reducing the risk of resource contention.

Here's an example of how you could structure the script:

#!/bin/bash

# Define the applications to monitor
apps=("/home/123/app1" "/home/123/app2")

# Function to check the status of an application
check_app() {
    local app="$1"
    if pgrep -f "$app" > /dev/null; then
        echo "Application $app is running. Terminating it."
        pkill -f "$app"
    else
        echo "Application $app is not running."
    fi
}

# Run the checks six times, ten seconds apart (roughly 50 seconds total,
# so the run finishes before cron launches the next one a minute later)
for ((i = 0; i < 6; i++)); do
    for app in "${apps[@]}"; do
        check_app "$app"
    done
    # Skip the final sleep so the script stays inside the one-minute window
    if (( i < 5 )); then
        sleep 10
    fi
done


In this example, the script defines an array of applications to monitor, and the `check_app` function checks the status of each application. The main loop executes the monitoring tasks six times, with a 10-second delay between each iteration.
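Assuming you save the combined script as, say, /home/123/monitor.sh (a hypothetical path) and make it executable, a single crontab entry is enough; cron starts it once per minute and the internal loop handles the ten-second spacing:

chmod +x /home/123/monitor.sh
# then add one line via crontab -e:
* * * * * /bin/bash /home/123/monitor.sh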

By using a single script and a more robust scheduling tool, you can ensure that the monitoring tasks are executed reliably and efficiently, without causing any disruptions to the overall system.
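If you go the systemd route, here is a minimal sketch of a timer/service pair; the unit names and the /home/123/monitor.sh path are placeholders. Unlike cron, systemd timers can fire at sub-minute intervals:

# /etc/systemd/system/app-monitor.service  (hypothetical name)
[Unit]
Description=Check monitored applications and stop them if running

[Service]
Type=oneshot
ExecStart=/bin/bash /home/123/monitor.sh

# /etc/systemd/system/app-monitor.timer
[Unit]
Description=Run app-monitor every 10 seconds

[Timer]
OnBootSec=10s
OnUnitActiveSec=10s
AccuracySec=1s

[Install]
WantedBy=timers.target

# activate with: systemctl daemon-reload && systemctl enable --now app-monitor.timer

With the timer firing every 10 seconds, the script no longer needs its internal six-pass loop; a single check per invocation is enough.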

Additionally, you may want to consider logging the results of the monitoring tasks or implementing some form of error handling to better understand the state of the applications and any issues that may arise.
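As a small sketch of that (the log path and tag are arbitrary), you could either send each result to syslog from inside the script, or simply capture the cron job's output in a file:

# inside check_app, write to syslog under a custom tag:
logger -t app-monitor "Terminated $app"

# or redirect everything the cron job prints into a log file:
* * * * * /bin/bash /home/123/monitor.sh >> /home/123/monitor.log 2>&1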

friv10games

I would craft a dynamic web interface that allows users to seamlessly schedule and manage their scripts.

1. I would design a responsive web application that features a user-friendly dashboard. Here, clients could easily set up scheduled tasks, with intuitive controls to adjust the interval - be it one minute or any custom duration.

2. To execute the script, I would create a visually appealing file uploader, where users can drag and drop their script files. Upon upload, the system would automatically grant the necessary execution rights.

3. The web app would then generate a snippet of code that the user can simply copy and paste into their script. This code would handle the sleep timers, allowing for flexible durations ranging from 10 seconds to 50 seconds, as needed.

4. Finally, I would integrate a robust cron job management system into the web application. Users could view, edit, and monitor all their scheduled tasks from a central location, ensuring a seamless and efficient workflow.

I aim to create a solution that not only meets the functional requirements but also delivers an exceptional user experience. The goal is to empower users with a visually engaging and intuitive platform to manage their scripts and automate their workflows with ease.

Roowlinonia

The issue that can arise is that one run may not finish before the next one starts.
If the script takes a long time, or grows substantially over time, overlapping runs can accumulate into many simultaneous processes, which can lead to memory allocation failures, resource contention, and overall system instability.
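One common guard against that pile-up (a sketch using flock from util-linux; the lock file path is arbitrary) is to wrap the cron entry in a non-blocking lock, so cron simply skips a run while the previous one still holds the lock:

* * * * * flock -n /tmp/1.sh.lock -c 'for i in $(seq 1 6); do /bin/sh /home/123/1.sh; sleep 10; done'

Note that the & is dropped here, so each pass waits for the script to finish before sleeping; otherwise backgrounded copies could still stack up within a single minute.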

reza10

If you want to run these scripts concurrently, one option is GNU Parallel, a tool that cron can invoke to execute multiple commands at once. This could be beneficial if your scripts are independent and don't interfere with each other. Here's how you might do it (a rough sketch):

* * * * * parallel -j 2 'for i in $(seq 1 6); do /bin/sh {} & sleep 10; done' ::: /home/123/1.sh /home/123/2.sh

Here parallel runs two jobs, one per script path listed after :::, substituting {} with that path; $(seq 1 6) replaces {1..6} because cron runs its commands with /bin/sh, which may not support brace expansion.

