
Software Development: Quality Control Dilemma

Started by Pybeatag, Dec 21, 2023, 06:49 AM


Pybeatag (Topic starter)

I'm a project manager at a small tech company. We are building our website, CRM, and other internal tools on common technologies like Yii2 and MySQL. The team consists of three full-time programmers plus myself as a technician. Our workflow runs through JIRA and Bitbucket, and there are currently no automated tests, so I manually test closed tasks for errors.



The main problem is that almost every task the developers close contains obvious errors that minimal testing would have caught. This raises the question: is it normal for developers to push code without testing it properly, or is it simply expected that testers will find these issues? Personally, I am not happy with this disregard for code quality.

Given this situation, I am open to recommendations. Should we hire a dedicated tester, replace our developers, or revise our processes? One idea is to track in JIRA how often tasks are returned for revision because of bugs, and then use that data to adjust our project timelines or to offer incentives for tasks completed without errors.

What is your perspective on this? How do you suggest we improve the quality of our development process and minimize the occurrence of errors in the code?


PrimoPierotz

This issue not only impacts the overall quality of the product but also increases the workload for both developers and yourself as the technician.
In order to address this situation and improve the quality of the development process, I would recommend a multi-faceted approach.

Firstly, introducing a dedicated tester to the team could significantly enhance the testing process. A tester would be able to thoroughly assess each task, identify potential issues, and provide valuable feedback to the developers. This division of labor allows developers to focus on coding while ensuring that a separate expert is responsible for identifying and reporting bugs.

Secondly, it might be valuable to reconsider the existing workflow and processes. Integrating automated testing into your workflow, possibly using tools such as Selenium for web testing, can help catch errors early in the development cycle. This shift towards automation can reduce the burden of manual testing and provide more comprehensive coverage of the codebase.
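Even a single browser-level smoke test is a useful first step. Below is a rough sketch using the php-webdriver package (composer require php-webdriver/webdriver) against a local Selenium server; the URL, form field names, and expected heading are placeholders I made up, so treat it as a starting point rather than something tied to your actual CRM.

    <?php
    // Minimal Selenium smoke test: log in and check that the dashboard loads.
    // Assumes a Selenium server is listening on localhost:4444; adjust the URL,
    // selectors, and credentials to your own application.
    require __DIR__ . '/vendor/autoload.php';

    use Facebook\WebDriver\Remote\RemoteWebDriver;
    use Facebook\WebDriver\Remote\DesiredCapabilities;
    use Facebook\WebDriver\WebDriverBy;

    $driver = RemoteWebDriver::create('http://localhost:4444/wd/hub', DesiredCapabilities::chrome());

    try {
        // Open the login page and submit the form (hypothetical field names).
        $driver->get('https://crm.example.local/site/login');
        $driver->findElement(WebDriverBy::name('LoginForm[username]'))->sendKeys('test-user');
        $driver->findElement(WebDriverBy::name('LoginForm[password]'))->sendKeys('secret');
        $driver->findElement(WebDriverBy::cssSelector('button[type="submit"]'))->click();

        // Fail loudly if the expected heading never appears.
        $heading = $driver->findElement(WebDriverBy::cssSelector('h1'))->getText();
        if (stripos($heading, 'Dashboard') === false) {
            throw new RuntimeException("Smoke test failed: unexpected heading '$heading'");
        }
        echo "Login smoke test passed\n";
    } finally {
        $driver->quit();
    }

Once something like this runs locally, it can be hooked into your Bitbucket pipeline so it executes on every pull request instead of relying on someone remembering to click through the site.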

Additionally, revising the development process to include code reviews can also contribute to higher code quality. By having another set of eyes look over the code changes before they are merged, potential errors and issues can be caught and resolved earlier in the development lifecycle.

The idea of using JIRA to track the frequency of task returns for revision due to bugs is an excellent initiative. By analyzing this data, you can gain insights into the performance of individual developers as well as the overall code quality. This information can then be used to adjust project timelines, identify areas for improvement, and even provide incentives for tasks completed without errors, thus encouraging a culture of quality within the team.
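As a rough illustration of how little it takes to pull that metric, here is a sketch against JIRA's REST search endpoint. The project key, the "Reopened" status, and the credentials are placeholders and would need to match your own workflow and instance.

    <?php
    // Rough sketch: count tasks that were sent back for rework, grouped by assignee.
    // Uses JIRA's /rest/api/2/search endpoint; project key, status name and
    // credentials below are placeholders.
    $jql = 'project = CRM AND status CHANGED TO "Reopened" AND updated >= -30d';
    $url = 'https://yourcompany.atlassian.net/rest/api/2/search'
         . '?maxResults=100&fields=assignee&jql=' . urlencode($jql);

    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_USERPWD        => 'user@example.com:API_TOKEN', // basic auth with an API token
        CURLOPT_HTTPHEADER     => ['Accept: application/json'],
    ]);
    $response = curl_exec($ch);
    curl_close($ch);

    $data   = is_string($response) ? json_decode($response, true) : [];
    $issues = $data['issues'] ?? [];

    // Tally reopened issues per developer.
    $reopenedByDev = [];
    foreach ($issues as $issue) {
        $dev = $issue['fields']['assignee']['displayName'] ?? 'Unassigned';
        $reopenedByDev[$dev] = ($reopenedByDev[$dev] ?? 0) + 1;
    }

    arsort($reopenedByDev);
    foreach ($reopenedByDev as $dev => $count) {
        echo sprintf("%-25s %d reopened task(s)\n", $dev, $count);
    }

Run periodically, this gives you a simple reopened-per-developer count to set against the total number of closed tasks.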
To sum up: I suggest implementing automated testing, hiring a dedicated tester, adding code reviews to the development process, and using the JIRA data to make informed decisions. Embracing these changes will not only raise code quality but also make your development workflow more efficient and effective.

MichaelGray

I find myself in a situation where superiors above me in the hierarchy seem to believe I pay excessive attention to error handling. The truth is, any program has two main objectives: to do the right thing with correct input data, and to avoid doing the wrong thing with incorrect input data. The former is usually straightforward because the requirements spell it out; the latter is considerably harder because it is rarely specified and rarely emphasized. Most people rely solely on try/catch statements and consider that sufficient.
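To make the distinction concrete, here is a small, purely illustrative sketch (the function and field names are invented). The first variant "handles errors" only in the sense that it hides them; the second validates the assumptions the calculation actually depends on and fails with a message that explains what went wrong.

    <?php
    // Illustrative only: one calculation, two very different attitudes to bad input.

    // Variant 1: blanket try/catch. Nothing is validated; the caller just gets 0.0
    // and never learns that anything went wrong.
    function applyDiscountLazy(array $order): float
    {
        try {
            return $order['total'] - $order['total'] * $order['discountPercent'] / 100;
        } catch (Throwable $e) {
            return 0.0;
        }
    }

    // Variant 2: check the assumptions the calculation relies on, and fail with
    // a message that says exactly what was wrong.
    function applyDiscount(array $order): float
    {
        if (!isset($order['total'], $order['discountPercent'])) {
            throw new InvalidArgumentException('Order must contain "total" and "discountPercent"');
        }
        $total   = $order['total'];
        $percent = $order['discountPercent'];

        if (!is_numeric($total) || $total < 0) {
            throw new InvalidArgumentException('Total must be a non-negative number, got: ' . var_export($total, true));
        }
        if (!is_numeric($percent) || $percent < 0 || $percent > 100) {
            throw new InvalidArgumentException('Discount must be between 0 and 100, got: ' . var_export($percent, true));
        }

        return $total - $total * $percent / 100;
    }

    // Example: applyDiscount(['total' => 100, 'discountPercent' => 15]) returns 85.0,
    // while applyDiscount(['total' => 'abc', 'discountPercent' => 15]) throws a clear exception.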

When it comes to programmers, it can be hard to question them about their error handling strategy, because they may not fully understand the context of the task or the errors that could realistically arise. In such cases the project manager should take responsibility for specifying error handling: the only error that is obvious in isolation is something like division by zero, while most other failure modes only become apparent in the specific context of the task.

On the other hand, when selecting programmers, it would be beneficial to assess their attention to detail and ability to identify mistakes. Therefore, the question arises: how do you evaluate these qualities?

I cannot give a definitive solution, but I would suggest talking with the developers about their commitment to doing the job properly, setting a deadline for an evaluation period, and defining transparent criteria for judging their performance (including subjective assessments, given your role as the supervisor). When the deadline arrives, the weakest developer could be let go. You could even start looking for a replacement before the evaluation period ends, to further motivate the team. This approach is stressful, but it is a radical treatment, like chemotherapy, which is sometimes necessary in critical situations.

It is also essential to work on preventing these problems, not just fixing them. If you have relevant experience, share it with the developers, but also be open to learning from them; their experience keeps growing and can offer valuable insights. Right now your feedback loop is weak, because issues only surface after a task is finished. Spending time observing the developers at work, perhaps by pairing with them on a task, would give you valuable perspective. What if there is a technical obstacle holding back the quality of development? That is worth investigating.

As an engineer, I strive to promote a culture of thoroughness and attention to detail within my team, fostering an environment where learning and improvement are continuous processes. The challenge lies not only in rectifying errors but in preventing them through careful planning and assessment.

ucourtneypaq

The decision about the minimum acceptable quality level usually rests with the developer. If you believe that level is not good enough, it is crucial to establish the standards up front, for example through defined testing procedures.
Why has this situation developed? It may be connected to the motivation system in use. For instance, if payment is based solely on task completion, it incentivizes shipping the most basic version that nominally closes the task. Another reason might be pressure to finish tasks as quickly as possible. In some cases the developer simply lacks the necessary expertise, in which case hiring a more qualified team member is essential.

At the start, it is preferable not to recruit testers. Instead, have the developers test each other's work; that also helps them grow in quality themselves. As for tracking how often tasks come back, keep in mind that endlessly searching for bugs is impractical: the complexity of reality can never be entirely covered by a finite algorithm.

Imposing sanctions for not immediately delivering an "ideal" product is the worst approach. Investing in quality will always yield benefits in terms of both time and money. The challenge is to find a balance between quality, cost, and time.

When evaluating competencies, it's understandable that they may be lower than those at major tech companies or financial institutions. While aiming to recruit highly skilled professionals, it's important to recognize the reality of the market and make the most of available talent.

Ionigohox

Your devs are probably cutting corners due to deadlines or poor culture.
The fix? Enforce mandatory peer reviews in Bitbucket, integrate CI/CD for automated testing even if basic, and track defect density per dev. Incentivize clean commits, but if they keep churning out buggy PRs, it's a hiring issue. Don't hire a tester to clean up their mess - that just enables sloppy coding.
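Tracking defect density doesn't need tooling beyond what you already have; the arithmetic is trivial once you export the counts from JIRA. A throwaway sketch with made-up numbers:

    <?php
    // Throwaway sketch: defect rate per developer = tasks returned with bugs / tasks closed.
    // The numbers here are invented; in practice pull them from your JIRA export.
    $stats = [
        'dev_a' => ['closed' => 42, 'returned' => 19],
        'dev_b' => ['closed' => 37, 'returned' => 6],
        'dev_c' => ['closed' => 25, 'returned' => 14],
    ];

    foreach ($stats as $dev => $s) {
        $rate = $s['closed'] > 0 ? $s['returned'] / $s['closed'] : 0;
        echo sprintf("%-8s closed %3d, returned %3d  ->  %.0f%% defect rate\n",
            $dev, $s['closed'], $s['returned'], $rate * 100);
    }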