
How to ensure you have the right software development to optimise your product’s success


In technology, the success of a product is often determined by its quality, which in turn is dictated by how it is developed. A good product must operate seamlessly, with as few bugs as possible, and be regularly updated with new features.

These are desirable qualities that can only be achieved with the right software development process. By following a few essential steps, software development will become a much less daunting stop on your product roadmap.

Good specification

Every successful project starts with a good specification document. Whether it is derived from a list of client requirements or stems from an internal idea that will flourish into a company product, it must be carefully thought through. A good software specification contains both functional requirements, such as how an application responds to specific user input, and non-functional requirements, such as the level of security needed, coding standards and any other standards the final product must adhere to.

Consistent team feedback and communication are key at this stage, as input should be taken not only from the stakeholders but from the actual developers as well. This helps prevent scope changes down the road caused by technological limitations or unforeseen costs, whether in money or in time.

Time spent whiteboarding ideas and passing them between team members for feedback is a long-term investment whose returns become evident during development. As a tip, drawing tablets and virtual whiteboards work well for remote brainstorming sessions.

Forward thinking

To ensure streamlined software development, project management should be tightly integrated with product development.

The first recommendation is to split a project into smaller, manageable pieces, which allows the software engineers to focus on specific tasks without having to worry too much about the big picture. Essentially, this is what software development is at its core: breaking down large, complicated problems into smaller, separate issues that can be solved individually.

Smaller, achievable goals give the development team the drive they need to make a product successful. Project planning is a large beast of its own, covered in more detail by dedicated software development practices such as Agile, Kanban and Waterfall. The important nugget of information here is to pair a forward-thinking mentality with goal-oriented planning.

Secondly, there is the software planning, or more specifically, the software architecture behind the solution that solves the problem. This is where the ‘big picture’ comes into play, and good forward thinking ensures that the end goal will be met.

The first step is to consider the target platform. Does it all run in the cloud? Does it run on x86 or ARM machines? Which operating system is being used?

These are the types of questions that must be asked and answered. Once they have been discussed, the larger solution can be split into modules, and a big part of the architecture is how they are all interconnected.

Experience is key here, and this is where seniority plays an important role: the size of a project must be gauged accurately, as it determines the appropriate level of modularity. A monolithic application with high coupling will not be maintainable and will be very difficult to scale when necessary. Conversely, a highly modular system built only to solve a very simple, specific problem is what is often called overengineering, and it can be just as unmaintainable as the previous example, with no real added benefit.
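As a rough illustration of ‘just enough’ modularity, here is a minimal sketch, with hypothetical names, of a small module exposed through a narrow header. Callers depend only on this interface, not on how the readings are produced, so the implementation can be swapped (real hardware, simulator, test stub) without touching the rest of the code.

```c
/* sensor.h - hypothetical example: the only part of the sensor module
 * that the rest of the codebase ever sees. */
#ifndef SENSOR_H
#define SENSOR_H

#include <stdint.h>

int     sensor_init(void);               /* returns 0 on success */
int32_t sensor_read_millicelsius(void);  /* latest temperature reading */

#endif /* SENSOR_H */
```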

Furthermore, good planning ensures that the underlying software foundation allows the developers to focus more on the present problem and less on code refactoring due to missing building blocks.

Take logging and debugging, for example.

At some point, every project will acquire bugs.

A good logging module that offers different levels of verbosity, implemented consistently throughout the program, ensures that small, annoying bugs do not require an entire week’s worth of debugging to fix. Log messages are the breadcrumb trail that leads to the core of the problem.
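As a minimal sketch of what such a module might look like, assuming a C codebase and hypothetical names, the macro below filters messages by a verbosity threshold; because both sides of the comparison are compile-time constants, compilers will usually strip the filtered calls entirely, which matters on small embedded targets.

```c
#include <stdio.h>

/* Verbosity levels, ordered from most to least severe. */
enum log_level { LOG_ERROR = 0, LOG_WARN, LOG_INFO, LOG_DEBUG };

/* Messages less severe than this threshold are filtered out. */
#ifndef LOG_THRESHOLD
#define LOG_THRESHOLD LOG_INFO
#endif

#define LOG(level, ...)                           \
    do {                                          \
        if ((level) <= LOG_THRESHOLD) {           \
            fprintf(stderr, "[%s] ", #level);     \
            fprintf(stderr, __VA_ARGS__);         \
            fputc('\n', stderr);                  \
        }                                         \
    } while (0)

int main(void)
{
    LOG(LOG_INFO,  "sensor init, id=%d", 3);
    LOG(LOG_DEBUG, "raw reading: %d", 1023);      /* filtered at LOG_INFO */
    LOG(LOG_ERROR, "sensor %d not responding", 3);
    return 0;
}
```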

Use the right tools

Before writing the actual code, make sure the team has the right tools to work with.

First and foremost, there should be either real or simulated hardware available that matches what the software product is intended to run on. If it is an embedded system, have the right development kit or manufactured model A prototypes at hand. If it is a mobile application, make sure the virtual devices have hardware specifications similar to those of the devices the product is intended for, for example a similar screen resolution. Only if it is a desktop-based application intended to run on a machine with similar specifications to the one it is being developed on does this not matter, and even then it is still highly recommended to test on a separate machine. “But it works on my machine” is not a reasonable excuse today.

Once you have the hardware figured out, the next step is to choose the software development environment. There are two options. The first is to build your own setup using a build system such as Make or Ninja and let the developers use whichever text editor they prefer, such as Visual Studio Code, Sublime Text or even Vim.

The advantage of this approach is flexibility, and the codebase is much easier to integrate into a CI/CD pipeline; unfortunately, it requires additional resources to set up from scratch and to maintain. The other option is to use an IDE suited to the project you are developing; for an embedded project, for example, this would be the manufacturer’s recommended IDE.

The advantage of this approach is that it is very much plug-and-play, with little effort required from the developer. For embedded systems, the IDE also comes with extra features such as driver libraries and SDKs, tools for flashing a microcontroller and controls for manipulating the internal registers, and neat debugging features including profiling.

Unfortunately, this all comes at the cost of customisability, with automation server integration for CI/CD often requiring workarounds. However, for embedded systems there is a third option, which combines the two previously mentioned: some chip vendors can generate a more generic, Makefile-based build system that is not tied to a specific vendor IDE, giving the advantages of faster project setup and easy access to vendor libraries whilst still maintaining a high level of flexibility.

Version control is a key element that should not be missing from any software project. It allows developers to make changes safely and to keep track of them throughout the development process. Whether a distributed system such as Git or a centralised one such as Subversion is used, it is paramount to use one, not least because it provides an additional layer of backup, as long as developers commit and synchronise their changes with a remote server often. Version control also enhances parallelism: each developer can work on an individual task on a separate branch and merge it into the main code base once it is finished. Modern version control systems can easily integrate reviews and testing into a streamlined process, ensuring a quality product can be delivered on time.

A final item that can be considered an essential tool is having the right libraries. It is impossible to write a modern application without using any libraries, unless all you are doing is blinking an LED in assembler. However, try to keep coupling as low as possible by using abstraction layers, so that you can easily swap libraries or replace them with in-house ones for better performance or other reasons.
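A minimal sketch of such an abstraction layer, assuming a C codebase and hypothetical names, is shown below. The application only ever calls through the hash_ops_t interface, so the third-party hashing library can be replaced by an in-house or mock implementation by changing a single translation unit.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* The only interface the rest of the application sees. */
typedef struct {
    int (*sha256)(const uint8_t *data, size_t len, uint8_t out[32]);
} hash_ops_t;

/* Mock backend, used here for illustration and in unit tests; a real build
 * would wrap the chosen library behind the same function pointer. */
static int mock_sha256(const uint8_t *data, size_t len, uint8_t out[32])
{
    (void)data;
    (void)len;
    memset(out, 0xAA, 32);  /* deterministic fake digest */
    return 0;
}

const hash_ops_t *hash_backend(void)
{
    static const hash_ops_t mock = { .sha256 = mock_sha256 };
    return &mock;
}
```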

One such reason might be licensing: make sure you read and fully understand the licensing implications of open-source libraries, and only use them if you understand their requirements and limitations. Automated tools such as WhiteSource can help ensure you are compliant. Furthermore, try to use only libraries that are either actively maintained or at least widely used and tested. This is especially important for embedded products and device drivers.

Often, new microcontrollers and microprocessors ship with very basic drivers that might not have all the functionality you need, for example an SPI driver that is synchronous when you are building an asynchronous system. In such scenarios, if the timescales of the project allow it and it is a reasonable task to achieve, writing in-house libraries is always better. However, this only applies to simpler components such as device drivers and might not be applicable to more complicated libraries, for example a Transport Layer Security (TLS) library, which is a large project in its own right.
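As a hypothetical sketch of what that looks like in practice, the declarations below contrast a vendor-style blocking SPI transfer with the non-blocking, callback-based API an asynchronous system needs; the names are illustrative, not taken from any particular SDK.

```c
#include <stddef.h>
#include <stdint.h>

/* Completion callback, invoked when a queued transfer finishes. */
typedef void (*spi_done_cb)(int status, void *ctx);

/* Vendor-style blocking transfer: returns only once all bytes have been
 * shifted out, stalling everything else in the meantime. */
int vendor_spi_transfer(const uint8_t *tx, uint8_t *rx, size_t len);

/* In-house non-blocking wrapper: queues the transfer and returns at once;
 * cb is invoked from the interrupt or DMA handler when it completes.
 * Writing a wrapper like this is often a reasonable in-house task, unlike
 * reimplementing something the size of a TLS stack. */
int spi_transfer_async(const uint8_t *tx, uint8_t *rx, size_t len,
                       spi_done_cb cb, void *ctx);
```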

In the same mindset, for AI projects, make sure you have a good dataset prepared, or at least a planned method of obtaining one. For example, if you are working on a vision AI project, do not rely on the fact that the internet is full of pictures; spend the time to do proper research and build your dataset early.

Review early

Two last essential steps are required to guarantee a product’s success: reviews and testing.

To ensure development is on the right track, reviews should be done consistently throughout the development process, not just at the end when the product is ready to be released and is undergoing final testing. It is much easier to refactor code at the beginning of development, as the code base grows larger with time and coupling increases.

On the same note, it is far easier to document code as it is being written, while the thought process is still fresh in the developer’s mind. This is more in line with the Agile methodology, but it should also apply to more linear approaches such as the Waterfall model, just less often.

Reviewing is not guaranteed to spot all possible bugs and edge-case errors, but it can detect them early. What reviews truly excel at, however, is keeping the code maintainable and easy to follow. A developer focused on a particular task might sometimes fail to notice that a section is not well documented or has far too much coupling. During the review process this is easily spotted by a fresh pair of eyes, and the next engineer who has to deal with the same code will be very grateful.

A common saying when writing code is ‘be kind to your future self’, as you will likely forget the intricacies of certain code sections and ‘future you’ will probably have to revisit them. Furthermore, reviews should be done often enough that there is still time to refactor bad code, but not so frequently that they become a major blocker.

Use the right tools, such as linters and static and dynamic code analysis programs, so that the reviewer can focus on the maintainability, logic and reasoning behind the code under review, and less on checking whether the curly braces have been placed at the end or the beginning of the line.

Testing

The last essential step is testing. One of the success metrics of a project is delivering on time, and to ensure that no compromises have been made to the quality of the work, it must be thoroughly tested, using multiple kinds of testing. First, there are the unit tests and self-testing routines implemented in the code itself; critical projects often require 70-80% unit test coverage.
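A minimal, framework-free sketch of a unit test is shown below, with hypothetical function names; in practice a C test harness such as Unity or CMocka would usually be used, but the idea is the same: exercise one small unit in isolation and check its output.

```c
#include <assert.h>
#include <stdio.h>

/* Unit under test: clamp a raw 12-bit ADC reading into its valid range. */
static int clamp_adc(int raw)
{
    if (raw < 0)    return 0;
    if (raw > 4095) return 4095;
    return raw;
}

static void test_clamp_adc(void)
{
    assert(clamp_adc(-5)    == 0);     /* below range  */
    assert(clamp_adc(100)   == 100);   /* normal value */
    assert(clamp_adc(70000) == 4095);  /* above range  */
}

int main(void)
{
    test_clamp_adc();
    printf("all tests passed\n");
    return 0;
}
```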

Next, you have black-box testing, where developers or users who are not familiar with the code try to use the system and find bugs, either by using it as intended or by doing what most end-users do and not following the instructions.

Finally, you have edge-case testing, where you test against specific, unlikely scenarios that would have catastrophic consequences. This kind of testing must be taken very seriously in critical applications. Remember the Y2K bug? I wonder how many 32-bit embedded systems being built today are designed to cope with the year 2038 problem.
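For context, a signed 32-bit time_t overflows 2^31 - 1 seconds after the Unix epoch, on 19 January 2038. The short, illustrative check below is one cheap edge-case guard for long-lived products; storing timestamps in an explicit 64-bit type sidesteps the problem regardless of the platform’s time_t definition.

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Flag builds whose time_t is still 32 bits wide. */
    if (sizeof(time_t) < 8) {
        printf("WARNING: 32-bit time_t, this build is affected by the "
               "year 2038 problem\n");
    }

    /* Use an explicit 64-bit type for stored timestamps. */
    int64_t now = (int64_t)time(NULL);
    printf("seconds since epoch: %lld\n", (long long)now);
    return 0;
}
```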

In today’s software development environments, where products are continuously updated with new features and bug fixes, Continuous Integration and Continuous Delivery (commonly referred to simply as CI/CD) tools offer the means to build and test entire systems automatically as soon as new code is merged in. This is most successful at finding backwards-compatibility bugs, where new features break old functionality that might not be tested manually because it is not part of the new work. It also excels at reducing the time it takes to produce production-ready releases.

The more streamlined the CI/CD pipeline is, the quicker the entire software development process becomes. It also helps guarantee that the updates end-users receive for the shiny new product from you or your company work flawlessly. This is less often done with embedded systems, as real hardware has to be emulated and full testing requires specialised test fixtures, but it is becoming more common.

Conclusion

This is not a definitive guide to software development, and it certainly is not complete, but following these basic steps will go a long way towards ensuring your product’s success. Use it as your building block and cherry-pick what seems right for your environment, adding new bits as necessary.

Now that you know what it takes to have the right software development to ensure your product’s success, are you ready to bring your idea to life?


Author

  • Cezar Chirila

    Cezar Chirila is an Embedded Electronics Engineer at the electronics design and software development consultancy firm Ignys in Nottingham, United Kingdom. He focuses on the software and hardware development aspects of embedded systems using ARM microcontrollers and microprocessors. In his spare time, he enjoys 3D printing and documenting his own projects.
