Updated …
- 2024-11-08: Added section at the end on published and planned episodes
- 2024-11-19: Added episodes 3-5
- 2024-11-25: Added episode 6
- 2024-11-28: Added episodes 7-10
I can hear you loud and clear …
Ooohhh my god: Not another blog post about AI!!!
Bear with me. I will try to make reading it (or watching it) worth your time. I promise.
Let’s set the stage. My opinion is …
The planet has problems (overpopulation, climate change, poverty and economic inequality, cancer and Alzheimer's, …). AI is coming. And I believe AI can help us solve some of these problems or make them more manageable (e.g. AlphaFold).
But there are also a lot of emotions. In my case it is excitement. Other people feel fear and uncertainty, mainly because they are afraid that AI will kill their jobs and/or take over the planet (à la Terminator or Matrix or Transcendence; the latter being less well-known than the other two, but in my opinion the best of the three).
With every innovation comes a risk. Right now we are learning how to forge metal. In the worst case we will build weapons that have the potential to destroy us. In the best case we will build scalpels and put them into the hands of skilled surgeons.
We obviously want the latter. And guess what: This blog post is NOT about how to get there! Feel free to read all of the other blog posts that talk about the journey and what we need to do to mitigate the risks to ensure a good outcome.
I like to think of myself as a realistic optimist. I think we can get this right.
Now let’s narrow it down a bit, shall we? Let’s just talk about LLMs (as a subset of AI technologies) and what their impact is going to be on how we build and own software systems over the next 5 years or so.
Clearly there is a fear out there that LLMs will be(come) super (10x) software engineers, will start to build software systems on their own, will write all the code to implement them, and that all software engineers will lose their jobs.
Then there is a cohort of people who believe that LLMs are basically useless for software engineering, because all productivity gains will be eliminated by the increased cost of reasoning about what the LLM just did and/or the increased cost of maintaining a system that you no longer understand.
As always I think the truth is in the middle. Not only for software engineering, but for most applications of AI.
I do not think using AI means that AI will replace humans (at least not any time soon). Instead I think that over the next 10 years we will see that, in the majority of cases, AI will complement and augment humans and give them super-powers (like the scalpel in the hands of the skilled surgeon).
This means I predict that in the next 5 years we will see the rise of the Augmented Software Engineer (ASE). The ASE knows how to use AI to develop better software faster and to run/own/maintain bigger systems at a lower cost and with higher uptime.
How big the productivity gains or cost savings will be, I don’t know. But I am pretty sure they will be bigger than 0%. This means I am also pretty sure that engineers or engineering organisations that do not know how to use AI to build better systems faster will end up with a competitive disadvantage.
Sssooo …
Over the next couple of months I want to produce and publish a couple of screencasts to explore what is already working and what is maybe not working yet. How should a software engineer use AI today to start the journey of becoming an ASE over the next 5 years?
Spoiler alert: Good software engineers write code for less than 2 hours/day. For the rest of the day they work with others to make sure they build the right thing and build it right.
This means that just talking about how LLMs can write code is the wrong conversation. We need to think and talk about how AI can help us make the entire day more effective and efficient.
For instance, let’s forget about writing code for a moment. For every line of code that we write, we need to read 100 lines of code. Let’s figure out how AI can help us read more code faster.
One interesting question to ask is whether the value of LLMs goes beyond the number of saved keystrokes. Maybe a bigger value is that some of the cognitive load for the easier, more mundane tasks gets reduced, and you end up with more energy and brain power for the tasks that need it.
Let’s also figure out how AI can help us with the architecture and design of the system and the maintenance of the system.
There is also research showing that AI can help you be(come) more effective and efficient with the more mundane (well-understood) parts of the job, but … that it might be dangerous to rely on AI to skip the learning phase, because you lose control over the system you build. One idea could be that you first have to learn about the problem, the tools and the technologies on your own; only after you understand what you are doing and why, what the options, alternatives and drawbacks are, and after the first implementation, do you start to use AI to refine and iterate on that first implementation.
Even worse, for younger engineers there is a risk, even earlier in their careers, that they will never learn enough problem-solving and critical-thinking skills to make good decisions and develop good judgement.
Last but not least, AI will also change the way we as software engineers work together, in (maybe) unexpected ways. Communication is obviously a big part of our job. We write a lot. We read a lot. Not only code, but also documents and presentations and Slack messages. We need to share information. We need to align on what good and right look like. Good engineers do not only write things down. They also prepare presentations and then deliver and record them. Or they prepare a screencast in the first place that can be consumed whenever it makes sense. For some people this is easier than for others. Some people like writing better than presenting or recording a screencast. For these engineers AI can become an enablement tool and a communication improvement tool, because they can use an avatar to deliver the content and their message.
And yes, the video above was generated with heyGen.
I hope that by now we can all agree that there is a lot to think about, talk about and explore.
For the screencast series I envision the following “seasons” …
- Season 1: Coding with LLMs
  - Solving coding problems with the help of LLMs
  - Using 99-scala-problems as a driver
  - LLMs that I intend to use and evaluate …
    - GitHub Copilot, ChatGPT 4o with Canvas, Anthropic Claude, cursor.ai, Codeium, Supermaven, …
  - Writing code and reading code
  - Building systems and maintaining systems (fixing bugs, making changes, …)
- Season 2: AI for the rest of the day
  - Reading and writing documents
  - Preparing and running good meetings
  - Effective and efficient communication
- Season 3: Today, tomorrow, next year, …
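To make Season 1 concrete, here is the shape of a typical 99 Scala Problem — P06, checking whether a list is a palindrome. The solution below is a hand-written baseline sketch, not the one from the series; the screencasts will compare this kind of solution with what the LLMs propose:

```scala
// P06 from the 99 Scala Problems: find out whether a list is a palindrome.
// Hand-written baseline; good enough to compare LLM-generated variants against.
object P06 {
  def isPalindrome[A](xs: List[A]): Boolean =
    xs == xs.reverse

  def main(args: Array[String]): Unit = {
    println(isPalindrome(List(1, 2, 3, 2, 1))) // true
    println(isPalindrome(List(1, 2, 3)))       // false
  }
}
```

Trivial on purpose: the interesting part of the series is not the solution itself, but the workflow around it.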
Every season will probably have ~10 episodes. Every episode will be ~5 mins. I am looking to publish one episode per week. Please like and subscribe. Feedback is welcome.
Let’s go …
Published episodes …
- Ep. -1 (transcript): What is the Augmented Software Engineer (my first heyGen video; and you can see it :))
- Ep. 0 (transcript): What will we do? How will we do it? Introducing the 99 Scala Problems
- Ep. 1 (transcript): Install GitHub Copilot in VSCode. Fixing compile-time problems
- Ep. 2 (transcript): Fixing failing tests. Adding more tests
- Ep. 3 (transcript): Functions and features. What do we need/want?
- Ep. 4 (transcript): Context is king
- Ep. 5 (transcript): Stacks - What are our options?
- Ep. 6 (transcript): P50 - Kickoff
- Ep. 7 (transcript): P50 - Review
- Ep. 8 (transcript): P50 - Fixing issues
- Ep. 9 (transcript): P50 - More tests
- Ep. 10 (transcript): P50 - Lessons learned
Planned episodes …
- How-To: Solving a problem with GitHub Copilot (GHCP)
- How-To: Solving a problem with Anthropic Claude (AC)
- Know-How: Compare GHCP with AC
- Do this with the other LLMs too
- Best-Practice: Build the test first. And be very thorough with the development of the test. Do not JUST generate it. Use the development of the test to learn about the problem and possible solutions. You can then be more aggressive with the assistance you take when implementing the solution. Resist the temptation to do the same with the test
- Know-How: How to use LLMs on a plane (without an internet connection)
- Best-Practice: Ask Copilot to explain problem 6 to us. And then add a precondition to make sure we are only testing words, meaning the string cannot contain a space. That probably means that we also need to adjust the test(s)
- How-To: Do assisted reviews (pre-screen, pre-nitpick, sanity-checks against coding standards) for pull-requests (PR). Checking the PR against the original (feature) ticket. Identifying the best engineer to close a PR faster
- Best-Practice: When to use AI (for the toil and for research) and when not to use it (to solve a problem that you have never solved before; to generate tests AND implementations)
- Best-Practice: Who should use AI (junior vs. senior engineers) and how should they use it?
- Thoughts: Augmented Software Engineering is assisted Software Engineering, not delegated Software Engineering. The AI is a junior software engineer whose work you need to review, not a principal who knows better than you
- ???
- ???
- ???
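The “build the test first” idea above could look like this for problem 6, including the no-spaces precondition. Names and structure are illustrative, not from the series — the point is that the checks are written and understood by hand first, and only the implementation would be a candidate for LLM assistance:

```scala
// Hypothetical test-first sketch for problem 6 (palindrome words).
object PalindromeWords {
  def isPalindrome(word: String): Boolean = {
    // Precondition: only single words are valid input (no spaces).
    require(!word.contains(' '), "input must be a single word (no spaces)")
    word == word.reverse
  }

  def main(args: Array[String]): Unit = {
    // Hand-written checks, developed before (and independently of) the
    // implementation above.
    assert(isPalindrome("racecar"))
    assert(!isPalindrome("scala"))
    assert(isPalindrome("a"))
    // require throws IllegalArgumentException when the precondition fails.
    val rejected =
      try { isPalindrome("not a word"); false }
      catch { case _: IllegalArgumentException => true }
    assert(rejected, "strings with spaces must be rejected")
    println("all checks passed")
  }
}
```

Having developed the checks yourself, you know the problem well enough to review whatever implementation the LLM proposes.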