Most of the software I have written for work has next to no functional or unit tests. It is industrial process monitoring software that runs 24/7 on remote machines thousands of miles away. The simple fact is that it has to work, no ifs or buts, even when there are faults in its environment.
That does not mean the software has not been tested.
Testing is subjective to a software's use. You could have 10,000 tests and still have a series of bugs.
Testing for my work falls into two categories: "Tested by design" and "Environmental Testing". "Tested by design" means developing the software in such a way that designing and writing it inherits the tests as you go. When I design a feature, I expand on a chosen solution and then methodically branch out through that feature's uses, building a map of its dependencies, scenarios, outcomes, consequences and so on. That map becomes the basis for how I write the code, because I have inherently put in mitigations for everything I considered beforehand. There is no point in testing something excessively if you can, or already have, guaranteed in code that it will perform exactly as designed. That may seem a controversial thing to say.
With "Environmental Testing" we simulate the environment the software is put into, complete with real external factors. This suits the particular software I write; it is not always the best approach for another project.
I wrote my work's software from scratch 3 years ago and it has had various incremental additions and changes since. Yes, there have been bugs; I will admit that entirely. The key thing, however, is that the software has rarely crashed or faulted even while containing those bugs. Most faults that have been found turned out to be configuration errors or issues.
I would be interested in how others approach testing strategies for the software they have written.