The last few weeks have been pretty crazy when it comes to security issues that affect basically everyone. Applications, virtualization stacks like QEMU, CI/CD platforms, operating systems, kernels and even components that were considered stable and trusted for years suddenly became part of critical discussions again.
What makes this wave of vulnerabilities different is not only the technical impact itself. It is the way many of these issues were discovered. Bugs that survived years of reviews, audits, production usage and thousands of developers looking at the same code are now being uncovered within days. In many cases the common factor is artificial intelligence assisted research.
It was honestly not easy to find a proper title for this post because all of them somehow describe the current situation perfectly.
- The Age of AI Vulnerability Discovery Has Begun
- Decades of Trust, Minutes to Break: AI and Modern CVE Discovery
- The End of Human Scale Security Engineering
- Fragnesia, DirtyFrag and Friends: The AI Bug Hunting Explosion
But in the end I settled on a question instead:
- Programming After AI: Are Humans Still Good Enough?
All of these headlines point to the same uncomfortable reality. Modern software has reached a level of complexity where humans alone may no longer be able to fully understand every corner case, race condition and unexpected interaction hidden deep inside millions of lines of code.
The scary part is that most of these vulnerabilities were not introduced yesterday. Some of them existed silently for years. They lived through releases, rewrites, migrations and security reviews without being noticed. Then suddenly AI assisted analysis enters the picture and starts finding them one after another.
This raises a serious question for the future of software engineering and security research. Are we slowly reaching the point where humans can still build systems, but no longer fully secure them without machine assistance? And if AI becomes better at finding vulnerabilities than humans are at preventing them, what does that mean for the future of secure software development?
The Past: When Humans Were Coding and Reviewing
For decades software development was primarily driven by humans alone. People learned programming languages, understood algorithms, studied operating systems and network communication, and eventually started building their own applications, libraries and platforms. Every piece of software reflected the knowledge, experience and mindset of the person who created it.
Becoming a skilled developer was never something that happened within weeks or months. It often took years to truly understand how software works internally and how different systems interact with each other. Programming was never only about writing code itself. Developers also had to understand operating systems, APIs, protocols, databases, virtualization, authentication, networking, hardware behavior and many other surrounding components. A good engineer did not only know how to write functions, but also understood the complete environment in which those functions were running.
The quality of code always depended heavily on the capabilities of its author. An experienced developer could write elegant, performant and maintainable code that survived for years with only minimal adjustments. Others produced software that worked, but consumed unnecessary resources, scaled poorly or became difficult to maintain over time. Performance optimizations, memory management and architecture decisions were often directly tied to the knowledge level of the developer.
At the same time humans also introduced mistakes into their own code. Sometimes these mistakes were small and harmless, resulting in minor bugs or unexpected behavior. In other cases they created severe security vulnerabilities that remained hidden for years. A missing boundary check, incorrect assumptions about concurrency, unsafe memory handling or misunderstood access controls could eventually become the foundation for critical security issues.
Software engineering therefore evolved around the principle that reviews would minimize those problems. Developers reviewed each other's code, tested changes, discussed implementations and tried to identify weaknesses before software reached production. This process became one of the most important foundations of modern software development. Pull requests, peer reviews, testing pipelines and quality gates were all designed around the assumption that humans are capable of validating the work of other humans.
However, this approach always had natural limitations. A developer who wrote a specific function or subsystem usually understood it very well. The original author remembered why certain decisions were made, which workarounds existed and where edge cases had already been considered. The longer someone worked on the same project, the more the code became an extension of their own thinking.
For external contributors or newcomers the situation was often completely different. Even understanding relatively small functions could already become difficult. Variable names reflected internal assumptions, undocumented dependencies existed between modules and logic paths evolved over years of changes. Code that looked simple at first glance could hide complex interactions that were only understood by the original maintainers.
As projects grew larger this problem scaled dramatically. Modern systems are rarely isolated applications anymore. Today software interacts with APIs, databases, virtualization layers, authentication systems, container runtimes, orchestration platforms, message brokers and external services all at the same time. A single logical mistake in one component can create unexpected side effects across entirely different systems.
This increasing complexity also changed the nature of security issues. Early software bugs were often relatively direct and easy to identify. Modern vulnerabilities frequently emerge from combinations of conditions, timing issues, race conditions or interactions between components that were never expected to communicate in a dangerous way. The famous phrase that software is eating the world also means that software complexity is becoming impossible for single individuals to fully understand.
Another important factor was human exhaustion. Reviewing thousands of lines of code every day is mentally demanding. Developers become tired, deadlines create pressure and companies prioritize feature delivery over perfect engineering. Even highly skilled reviewers can miss critical issues simply because modern code bases exceed human attention spans. Many vulnerabilities that are discovered today are not necessarily hidden because developers were careless, but because the amount of logic and possible execution paths became too large for humans to reason about consistently. So at this point it surprises nobody that all of a sudden many security issues were uncovered with the help of AI, such as DirtyFrag, Fragnesia, pTrace or the QEMU issues, among many others. The bad part? Even some of those fixes, like the one for Fragnesia, were inconsistent, and just a bit later new PoCs (e.g., Fragnesia 2 by v12sec) surfaced.
There was also a strong culture of trust in software engineering. Stable code that survived years in production was often considered safe by default (which is nowadays definitely a wrong assumption). Projects with many contributors and large communities created the impression that somebody would eventually notice severe issues. But recent discoveries show that this assumption was overly optimistic. Vulnerabilities can survive years of development, thousands of commits and countless reviews without being detected, all for understandable reasons!
At the same time humans also brought creativity and intuition into software development. Developers understood business requirements, made architectural tradeoffs and adapted systems to real world needs. Human engineers were not only writing instructions for computers, they were building abstractions and ecosystems around constantly changing requirements. This creative aspect became one of the greatest strengths of human driven software engineering.
The problem is that creativity alone does not guarantee correctness. As systems became larger and more interconnected, the gap between what humans can build and what humans can completely understand started growing rapidly. For years the industry accepted this as a normal reality. Bugs were expected, patches became part of daily operations and security updates evolved into a permanent lifecycle rather than exceptional events.
Looking back, the past era of software engineering was defined by an important balance. Humans created the systems, humans reviewed the systems and humans tried to secure the systems. The entire process depended on collective experience, collaboration and trust in human reasoning. The question today is whether this model can still scale in a world where software complexity grows faster than human comprehension itself.
Can Humans Still Scale in Software Development Compared to AI?
So, can the traditional model of software engineering still scale in a world where software complexity grows faster than human comprehension? Personally, I believe we are currently witnessing one of the biggest shifts the software industry has ever experienced. And just a few years ago, many of us completely underestimated it.
When the first public AI coding tools appeared, the overall reaction from developers was often somewhere between curiosity and amusement. The generated code was full of mistakes, inefficient logic, security issues and sometimes complete nonsense. AI generated applications looked impressive at first glance, but once developers actually reviewed the code, the weaknesses became obvious almost immediately.
Many engineers and developers, including myself, looked at those early results and thought the same thing: there is no way this replaces experienced developers. The gap between human engineers and machine generated code simply looked too large. AI lacked architectural understanding, context awareness, long term reasoning and deep technical intuition. At least that was our assumption at the time.
The reality is that our imagination was simply too limited.
What many people failed to realize was the speed at which these systems improve. Humans often think in relatively linear progressions. AI development over the last years was anything but linear. Models improved simultaneously across nearly every domain imaginable. Text generation improved. Image generation improved. Voice synthesis improved. Video generation improved. Reasoning improved. Context understanding improved.
One of the most famous examples can probably be seen in the evolution of AI generated videos around the well known “Will Smith eating spaghetti” example. Early generations looked surreal and broken. Hands melted into objects, faces deformed and movements made no physical sense. People laughed at it because it looked absurd and obviously artificial.
But every new generation improved dramatically. What looked impossible one year suddenly became convincing the next year. By 2025 the generated videos already became surprisingly realistic. By 2026 they started looking almost indistinguishable from reality itself. The progress between iterations was honestly difficult to fully comprehend in real time because humans naturally compare against the present rather than against where things started.
The exact same development happened in software engineering.
Early AI generated code was often unusable in production environments. Today models are capable of generating complete applications, APIs, infrastructure definitions, database schemas, Kubernetes manifests, CI and CD pipelines and even complex integrations between distributed systems. Modern models are no longer only generating isolated functions. They are increasingly capable of understanding relationships between entire systems.
And this changes everything!
AI does not get tired while reviewing code. It does not lose focus after reviewing the five hundredth pull request of the week. It does not forget edge cases because of stress, deadlines or lack of sleep. It can continuously analyze enormous code bases, compare implementation patterns, evaluate dependencies and search for dangerous combinations of conditions without interruption.
This becomes especially visible in security research. The recent wave of vulnerabilities discovered in operating systems, kernels, virtualization stacks and infrastructure software demonstrates something important. Many of those vulnerabilities existed for years, even in systems known for their strong security focus like FreeBSD and OpenBSD (so it isn't limited to Linux, as many people will try to tell you). Some survived countless reviews by highly experienced engineers and security researchers. They were hidden inside massive code bases that simply exceeded human capacity for complete reasoning, just as mentioned before.
AI changes this dynamic completely.
A machine can traverse execution paths endlessly. It can compare thousands of possible states, analyze race conditions and evaluate interactions between components at a scale that humans realistically cannot maintain consistently anymore. This does not necessarily mean AI “understands” software the same way humans do. But understanding is not always required to identify dangerous behavior patterns.
Another important difference is operational scale. Humans work in limited timeframes. Engineers need breaks, sleep and context switching. AI systems can operate continuously, twenty-four hours a day and seven days a week. They can scan repositories, generate patches, write tests, validate assumptions and perform repeated analysis without exhaustion.
Of course this also creates dangerous scenarios. Leaving autonomous systems completely unrestricted can absolutely end badly. AI can generate insecure code just as quickly as secure code. It can automate exploitation, accelerate malware development and massively reduce the entry barrier for offensive security activities. The same technology that helps defenders can also empower attackers.
This is why boundaries and controlled environments become increasingly important. AI works best when humans define goals, limitations and architectural direction. Guard rails matter. Context matters. Oversight matters. At least for now.
But even within those boundaries the productivity difference is becoming impossible to ignore. A single engineer assisted by advanced AI systems can now achieve output levels that previously required entire teams. Tasks that once consumed days can now be completed within hours. Documentation, refactoring, test generation, code reviews and vulnerability analysis are increasingly becoming partially automated workflows.
Good Developers Will Become Better With AI, Bad Ones...
One thing has become very clear over the last years. AI will not affect every developer equally. Good engineers who already understand systems deeply will become dramatically more effective with AI assistance. Bad developers on the other hand will most likely struggle harder and harder to justify their role in the future.
This may sound harsh at first, but software engineering was never only about writing syntax into an editor. The real value of experienced engineers always came from understanding architecture, dependencies, scalability, security, performance and operational behavior of systems in production environments. Those skills do not disappear with AI. In fact, they become even more important.
A strong engineer can already take an idea, split it into logical components, define boundaries, identify risks and understand how systems should communicate with each other. When AI enters this workflow, the same engineer suddenly gains the ability to execute those ideas at an entirely different scale and speed.
Tasks that previously consumed entire weeks can now be delegated almost instantly. Documentation can be generated automatically. Refactoring can happen across large repositories within minutes. APIs can be scaffolded quickly. Infrastructure definitions can be created, adjusted and validated continuously. Tests can be generated in parallel while another AI agent analyzes security issues and another one reviews dependencies or optimizes performance bottlenecks.
The most important shift is that good engineers no longer operate alone. They effectively start managing a team of AI agents that can work continuously and simultaneously across multiple scopes, technologies and programming languages.
A single experienced developer can now orchestrate workflows that previously required multiple junior developers, senior engineers, QA teams and documentation writers. The engineer no longer spends the majority of time manually implementing every single detail. Instead, the role becomes more strategic and supervisory.
This transformation affects far more than traditional software development itself. It affects nearly every technical role across the entire IT industry. DevOps engineering, infrastructure automation, CI and CD development, system administration, cloud operations, platform engineering, SDN based networking and even offensive and defensive security research are all being reshaped by AI accelerated workflows.
A modern engineer can now generate infrastructure deployments, validate Kubernetes configurations, optimize network policies, analyze firewall rules, simulate distributed environments and audit security relevant code paths simultaneously with AI assistance. Even highly specialized areas that previously required years of focused experience are increasingly becoming partially automated through intelligent tooling.
This is also one of the major reasons why the security industry currently experiences an almost overwhelming amount of new discoveries every single day. AI systems continuously scan enormous code bases, compare logic paths, analyze race conditions and identify unexpected interactions between components. Vulnerabilities that remained hidden for years are suddenly discovered one after another because machines are capable of reviewing software at a scale humans simply cannot maintain consistently anymore.
The huge increase in newly discovered vulnerabilities does not necessarily mean software suddenly became worse overnight. In many cases those issues already existed for years. What changed is the speed and depth at which AI assisted research can now analyze systems.
This is where the huge role shift begins.
In the future, fundamental knowledge about system architecture will likely become one of the most valuable skills in the industry. Understanding how components interact, how distributed systems behave under load, how APIs should be designed, how security boundaries are enforced and how data flows through complex environments becomes far more important than simply remembering syntax details for a specific language.
Prompting itself also becomes a skill. Not because prompts magically replace engineering knowledge, but because clear instructions require clear thinking. A developer who cannot structure problems properly will also struggle to guide AI systems effectively. Good prompting is often nothing more than good engineering communication translated into machine readable instructions.
Speaking of "translated into machine readable instructions", this part becomes more and more interesting because AI is no longer limited to a single scope and adapts to many other domains. Several approaches have emerged to improve results here. One of them is JSON prompting, where prompts are no longer written purely as natural language but instead structured into clearly defined machine-readable fields and instructions. Instead of vaguely describing an outcome, JSON prompting allows developers and researchers to define context, tasks, constraints, examples, output formats, priorities and even behavioral expectations in a deterministic structure that AI systems can process far more consistently. The result is often significantly more predictable, reusable and precise output, something that becomes increasingly important as AI moves deeper into software engineering, infrastructure automation, security research, CI/CD orchestration, video generation and large-scale operational workflows.
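To make this concrete, here is a minimal sketch of what such a JSON-structured prompt might look like. The field names are purely illustrative, not part of any standard or specific model API:

```python
import json

# A hypothetical JSON-structured prompt: instead of free-form natural language,
# context, task, constraints and the expected output format are separated into
# deterministic fields that a model can interpret far more consistently.
prompt = {
    "role": "security-reviewer",                  # behavioral expectation
    "context": "C code handling IP fragment reassembly",
    "task": "Identify memory-safety issues such as out-of-bounds reads",
    "constraints": [
        "Only report issues tied to a concrete line of code",
        "Do not suggest purely stylistic changes",
    ],
    "output_format": {                            # forces a parseable answer
        "type": "json",
        "fields": ["issue", "line", "severity", "explanation"],
    },
}

# The serialized form is what would actually be sent to a model.
serialized = json.dumps(prompt, indent=2)
print(serialized)
```

The point is not the exact schema but the determinism: every run sends the same structure, and the requested output format can be validated automatically instead of being parsed out of free text.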
At the same time boundaries become critical. AI systems can accidentally leak sensitive information, expose credentials, generate insecure implementations or interact with systems in unintended ways if they are not controlled correctly. The engineer therefore becomes responsible for defining strict operational guard rails, permissions and limitations around those systems.
This means future engineers will increasingly act like architects, supervisors and coordinators of autonomous workflows rather than pure code producers. The actual implementation work gets delegated to AI agents while humans focus on validation, priorities, architecture decisions and business requirements.
Ironically, this also changes the traditional mentoring structure inside engineering teams. Historically large parts of senior engineering time were consumed by onboarding juniors, reviewing pull requests, explaining system behavior and correcting implementation mistakes. AI systems now increasingly absorb many of those repetitive tasks.
Instead of mentoring several junior developers individually, a senior engineer might supervise multiple AI agents that already operate at relatively high implementation quality. Reviews still exist, but they move toward higher abstraction levels. The engineer checks whether the overall direction is correct rather than manually validating every single line of code.
This does not necessarily mean junior engineers disappear completely, but the expectations for entry level positions will likely change dramatically. Companies may need fewer people for repetitive implementation work because AI can already automate large parts of it. The remaining human roles become more focused on reasoning, architecture and oversight.
And this creates a dangerous situation for weaker developers.
People who mainly relied on copying code snippets without deeply understanding systems may suddenly find themselves competing directly against machines that can do exactly that, but faster and at significantly larger scale. Developers who never evolved beyond basic implementation tasks could face increasing pressure because AI systems continuously improve in those areas.
Good engineers however become amplified by AI!
A highly skilled developer with strong architectural understanding and operational experience can suddenly execute ideas at a speed that was previously impossible for individuals. One person can coordinate backend services, infrastructure automation, security analysis, frontend development, testing and deployment workflows simultaneously with AI assistance.
This creates an enormous productivity gap between engineers who adapt and engineers who do not.
The future therefore may not necessarily belong to developers who write the most code manually. It will probably belong to developers who best understand systems, define correct boundaries, communicate intent clearly and successfully orchestrate intelligent machines toward reliable outcomes.
In many ways software engineering is shifting from manual implementation toward intelligent delegation.
And the engineers who learn how to control that delegation effectively will likely become more powerful than entire development teams from only a few years ago.
Is the Future AI Coding?
Personally, I increasingly believe that the future of software development is moving toward AI driven engineering. Not because humans suddenly became unintelligent, but because the surrounding complexity of modern systems has reached a level that humans alone can no longer fully process consistently.
Modern infrastructure is no longer a single application running on a single server. Today software interacts with distributed APIs, virtualization layers, Kubernetes clusters, CI/CD pipelines, cloud environments, authentication systems, network overlays, message queues, storage backends and external third party services simultaneously. Every additional layer introduces new states, new dependencies and entirely new possibilities for unexpected interactions.
Humans are simply limited in how many execution paths, dependencies and edge cases they can reason about at the same time. Even highly experienced engineers eventually overlook conditions, assumptions or side effects because the amount of surrounding logic becomes too large. This is exactly why so many vulnerabilities survived for years inside trusted software stacks before AI assisted analysis started exposing them one after another.
And this is where AI changes the entire landscape.
The real power does not only come from a single AI model generating code. The real shift happens once multiple AI agents start working together. One agent can generate implementations, another reviews the code, another validates architecture decisions, another creates tests, while another continuously searches for security issues, race conditions or unexpected behavior patterns. Suddenly workflows become possible that humans alone could never realistically maintain at that scale.
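As a purely illustrative sketch of that orchestration pattern (the agent functions below are hypothetical stand-ins for calls to specialized AI models, not a real framework), such a pipeline could look roughly like this:

```python
# Hypothetical multi-agent pipeline: each "agent" stands in for a model call
# with a specialized instruction set. None of these functions correspond to a
# real API; they only illustrate the orchestration pattern described above.

def generate_code(spec: str) -> str:
    return f"// implementation for: {spec}"

def review_code(code: str) -> list[str]:
    # A reviewing agent would return a list of findings; empty means "clean".
    return []

def security_scan(code: str) -> list[str]:
    # A security agent continuously searching for dangerous patterns.
    return []

def write_tests(code: str) -> str:
    return f"// tests covering: {code}"

def pipeline(spec: str) -> dict:
    code = generate_code(spec)
    findings = review_code(code) + security_scan(code)
    # A human (or supervising agent) only steps in when findings exist.
    return {
        "code": code,
        "tests": write_tests(code),
        "findings": findings,
        "needs_human_review": bool(findings),
    }

result = pipeline("parse IPv4 fragment headers")
print(result["needs_human_review"])
```

The interesting design consequence is that the human moves to the top of the loop: instead of reading every line, the engineer defines the spec, the agents' constraints and the escalation condition under which a human gets pulled back in.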
The interesting part is that this no longer feels theoretical. It is already happening. AI systems can continuously create, review, validate and test thousands of different possibilities without exhaustion. Machines do not lose focus after reviewing the five hundredth pull request. They do not forget dependencies because of stress, deadlines or lack of sleep. They can continuously compare patterns across enormous repositories and identify combinations that would never realistically become visible to individual engineers.
What becomes increasingly obvious is that software development slowly shifts away from manual implementation toward orchestration. The engineer of the future may no longer write every single line of code directly, but instead coordinate intelligent systems that perform the majority of implementation and validation automatically.
But maybe this transformation is only one part of a much larger change that is currently beginning.
Languages and Human Communication
One of the most important areas in this entire development is natural language processing itself. In many ways it already affects daily life far outside traditional software engineering.
Human communication has never been perfectly deterministic. You can see this in normal conversations with friends, family members, partners or co-workers every single day. Something that sounds completely obvious and clear to one person might be interpreted entirely differently by somebody else. Humans constantly misunderstand context, intent, priorities and hidden assumptions.
The same principle also applies to large language models.
Prompts often appear clear to the person writing them while the model interprets them differently internally. This is exactly why prompt engineering became such a major topic. Developers try to improve outputs through better structured prompts, clearer constraints, examples, negative instruction lists and increasingly deterministic formatting approaches such as JSON prompting.
But in reality this is still relatively inefficient.
Natural language itself was built for humans, not for machines. Human language evolved around abstraction, emotions, ambiguity and incomplete context. Machines however work best with precision, structure and deterministic interpretation. This creates a strange middle layer where humans continuously try to translate thoughts into machine understandable instructions.
And this raises another interesting question for the future.
What if programming languages themselves eventually change completely?
Current programming languages were heavily designed around human readability and human maintainability. Languages evolved to make software easier for humans to write, review and debug. Abstractions, syntax simplifications and high level frameworks all primarily exist because humans need understandable systems.
But what happens once humans are no longer the primary implementers?
If software generation becomes largely autonomous, readability for humans may slowly lose importance. Machines do not require elegant syntax, beautiful abstractions or easy to understand naming conventions. AI systems could eventually operate on highly optimized intermediate representations or entirely new language structures that are designed purely for machine efficiency, optimization and performance rather than human comprehension.
Ironically, this would almost feel like a return to earlier eras of computing. Many engineers still remember assembly language development where humans worked much closer to machine instructions directly. Over time higher level languages abstracted away complexity because humans needed better maintainability and scalability. But if AI becomes the primary actor behind software generation, the industry could eventually move back toward representations that humans barely understand anymore.
This creates a difficult contradiction.
On one side fully machine optimized languages could massively improve performance, optimization and automated reasoning. On the other side humans would increasingly lose direct understanding of the software they still depend on. Validation itself becomes difficult once humans can no longer realistically review the generated logic manually.
And this may become one of the most important challenges of the next generation of software engineering: how do humans maintain trust in systems they can no longer fully understand themselves?
The Transition Phase Already Started
Personally, I am pretty sure that we are already inside the transition phase today.
Many engineers are no longer operating in a purely manual development workflow. Large parts of the industry already work in an AI assisted mode, whether openly acknowledged or not. Developers use AI for implementation, refactoring, documentation, infrastructure generation, debugging, testing and security analysis every single day. The productivity improvements are already too large to ignore.
At the same time, fully autonomous AI driven software development still feels slightly too early right now. Even though many so called “vibe coded” projects sometimes make that statement look questionable, they also reveal the current limitations very clearly. A lot of those projects look impressive initially, but long term maintainability, architectural consistency and operational reliability often still become problematic over time.
However, I do not think this limitation will remain for very long.
The next major iteration will most likely focus heavily on maintainability, autonomous validation and long term reasoning. Once AI systems become better at continuously managing larger code bases across extended periods of time, the remaining gap between AI assisted and fully AI driven development could shrink extremely quickly.
And after that, the next iteration may become even more disruptive: a complete language shift itself.
Programming languages may eventually evolve away from human readability toward machine optimized representations. Performance could improve dramatically, automated reasoning could become more efficient and systems could operate at scales that humans would no longer realistically maintain manually anymore.
The tradeoff however may be severe.
We could gain incredible efficiency while simultaneously losing direct human readability and understanding. And maybe that is ultimately the real future of software engineering: humans defining intent, boundaries and goals while autonomous systems increasingly handle the actual implementation, communication and optimization layers entirely on their own.
Whether we fully like that future or not, it already feels much closer than most people currently realize... And it might turn out to be quite scary after all!