In this post, I want to offer a perspective on how we build software.
Random pattern matching at scale
People seem to be very enthusiastic about recent AI events, imagining AI writing software and being very successful at it. I want to draw the reader's attention to the fact that we have already been doing AI-like programming for at least a decade. We have been hiring random people as software developers who don't have the slightest idea of how computers work or what good software is.
They make random changes to the code, handcuffed by the compiler, frameworks, and static analyzers that supervise their work and make sure obvious mistakes are caught and flagged early. We divide the huge work into very small steps with obvious success criteria, so we can easily derive a gradient for the project as a whole. We also hire other people to make sure everyone is brute-forcing compiling code using the provided tools. These people's job is to periodically ask how things are going, whether anybody is stuck, and whether they need any help to get back to effective brute forcing. In this culture, even mentioning that the handcuffs exist is quickly met with "best practices" schooling and shot down as unprofessional.
This does work
This strategy works because we have trained ourselves an army of solution-space searchers who try random things at scale. If you make changes mindlessly, it's called silly, but when you do it at scale, it's called machine learning. Or something like that, right?
We have been raising an army of brain-dead code mutators who are expected to notice simple patterns in the text and copy/paste them from one place to another without really understanding what they are doing.
CI/CD green - good; CI/CD red - bad.
This works well. Until it doesn't. Until something is broken and no one really knows how to fix it. At that point, we usually try doing overtime on what we have already been doing, hoping we will soon escape the local minimum, or we eventually ask someone who actually has a deeper understanding of computing for advice. Most of us still won't understand why their fix works, but we can see that it does, so problem solved.
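The search process described above can be sketched as a toy program. This is a deliberately silly illustration, not anyone's real workflow: a "developer" mutates a string at random with no understanding of it, and keeps a change only when the "CI" signal does not get worse. All names here (`ci_score`, `random_mutation`, `brute_force`) are invented for the sketch, and the target string stands in for "the tests pass".

```python
import random

TARGET = "hello world"  # stands in for "all tests green"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def ci_score(candidate):
    """The green/red signal: how many positions already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def random_mutation(candidate):
    """Change one character at random, with no understanding of why."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

def brute_force(seed=0, max_steps=100_000):
    """Mindless hill climbing: keep a mutation only if CI stays green-ish."""
    random.seed(seed)
    candidate = "".join(random.choice(ALPHABET) for _ in TARGET)
    for _ in range(max_steps):
        if candidate == TARGET:
            break
        mutated = random_mutation(candidate)
        # Accept the change only if the CI signal does not get worse.
        if ci_score(mutated) >= ci_score(candidate):
            candidate = mutated
    return candidate

print(brute_force())
```

With a fitness signal this dense, the mindless loop converges quickly; the analogy to real projects is that fine-grained tickets and a fast CI pipeline play the role of `ci_score`, turning development into exactly this kind of guided random search.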
What is the said understanding?
I don't know. It feels like humans have some way of seeing patterns emerge at scale and making a good estimate, or even proving, that a given "sequence of moves" will solve the problem. This is what AI seems to be missing right now: some ability to build worlds that stand on a foundation of rules and to simulate how those worlds behave.
INB4: I don't think this is some unique human trait that cannot be reproduced in a machine. It's not here yet, but that doesn't mean it's not just around the corner.
Going back to the title of this post
Understanding the details and fundamentals of computing, not just pattern matching and copying, is important. While AI might help us with the projects we currently use most of our software developers for, it still seems to be missing something humans can do without heavy computation: building sets of rules and doing some relatively cheap reasoning in our heads.