The quest to make programming empirical
Every single empirical discipline (think biology, physics, medicine, chemistry) would be very happy to have a useful and realistic model of the objects under study, instead of having to rely on expensive experiments to learn anything.
In computer science, we do have this luxury: we know exactly how a computer works, to a degree of fidelity that makes other scientists jealous. Sure, low-level details such as cache misses, tricky interactions between components, or concurrency issues can be complicated. But, at least in principle, we can reason about the behaviour of any piece of software and pin down its properties exactly.
And yet, it seems to me that software engineering is willingly moving to an empirical approach.
Instead of thinking really hard about how to code something, we copy-paste Stack Overflow or LLM-generated code we do not understand, and then test it and tweak it until it seems to do the job.
Instead of thinking hard about how fast software runs, we time it. We are then surprised when an edge case makes the system unresponsive.
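To make the timing trap concrete, here is a minimal C sketch (the function name is invented for illustration). A casual benchmark on small inputs says the code is fast enough; reasoning about it instead reveals that each strcat rescans the destination from the start, so the loop is quadratic, and the large input the benchmark never saw is exactly where the program crawls.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Joins `count` copies of `word`. Looks innocent, but strcat() rescans
 * the whole destination on every call, so the loop is O(n^2) in the
 * length of the output. */
char *join_naive(const char *word, size_t count) {
    char *out = malloc(count * strlen(word) + 1);
    if (out == NULL)
        return NULL;
    out[0] = '\0';
    for (size_t i = 0; i < count; i++)
        strcat(out, word);  /* rescans `out` from the start every time */
    return out;
}

int main(void) {
    /* Timing at n = 1000 says "fast enough"; each tenfold increase in n
     * costs roughly a hundredfold in time, which only deduction (or a
     * very unlucky edge case in production) would have told us. */
    for (size_t n = 1000; n <= 100000; n *= 10) {
        clock_t start = clock();
        char *s = join_naive("ab", n);
        double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("n = %6zu: %.3f s\n", n, elapsed);
        free(s);
    }
    return 0;
}
```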
Instead of building secure systems, we build systems and then pen-test them.
Instead of using memory-safe programming languages, or simply writing C in a memory-safe way, we rely on hacks such as ASLR.
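As a hedged illustration of the contrast (the function names are made up): the first version below contains the classic stack buffer overflow that ASLR merely makes harder to exploit reliably, while the second removes the bug outright by stating the bound, which is all that writing C in a memory-safe way asks of us here.

```c
#include <stdio.h>
#include <string.h>

#define BUF_SIZE 16

/* Unsafe: strcpy() has no bound. A `name` longer than BUF_SIZE - 1
 * bytes writes past `buf`; ASLR only randomizes where the damage
 * lands, it does not prevent the overflow. */
void greet_unsafe(const char *name) {
    char buf[BUF_SIZE];
    strcpy(buf, name);
    printf("hello, %s\n", buf);
}

/* Memory-safe C: the bound is stated at the call site and the copy is
 * truncated instead of overflowing. No probabilistic mitigation is
 * needed for this bug, because the bug is gone. */
void greet_safe(const char *name) {
    char buf[BUF_SIZE];
    snprintf(buf, sizeof buf, "%s", name);
    printf("hello, %s\n", buf);
}

int main(void) {
    const char *input = "a string much longer than sixteen bytes";
    greet_safe(input);  /* prints a truncated but harmless greeting */
    /* greet_unsafe(input) would corrupt the stack; left uncalled. */
    return 0;
}
```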
When a bug inevitably slips through the safety nets, we respond by “improving” the development process: we add more checks along the way, none of which is bulletproof. Other disciplines are forced to work this way, but we can surely do better.
It does seem that some algorithms, such as neural-network-based inference, are inherently resistant to deductive reasoning. But we should aim to reduce the use of such techniques, much as mechanical engineers like to reduce moving parts.
Instead, we as a community seem to be going the other way. Why?