Is Coding Dead? - DSBoost #52
Podcast Notes 🎙️
Should Kids Still Learn to Code? (Hunter Kempf) - KNN Ep. 182
In a recent statement, Nvidia CEO Jensen Huang predicted the death of traditional coding.
In this episode, Ken and Hunter reacted to this news:
Key takeaways
With the latest advancements in LLMs, it’s easy to see how programming will change dramatically, but the core principles will not.
Certain skills will remain essential, particularly in understanding when and where to apply AI programming.
In the future, you may not write code in a “computer language” but in your native language. Instead of writing traditional code, you will craft prompts that instruct AI to generate the necessary code. This so-called prompt engineering skill will keep evolving.
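As a toy illustration of this shift (the prompt text below is invented for this example, not from the episode), the same task can be expressed either as hand-written code or as a natural-language instruction handed to an LLM:

```python
# Traditional approach: the programmer writes the logic explicitly.
def average(numbers):
    """Return the arithmetic mean of a list of numbers."""
    return sum(numbers) / len(numbers)

# Prompt-based approach: the "program" becomes a natural-language
# instruction. (Hypothetical prompt; the actual LLM API call that
# would execute it is omitted here.)
prompt = (
    "Write a Python function named average that takes a list of "
    "numbers and returns their arithmetic mean."
)

print(average([2, 4, 6]))
```

Either way, someone still has to judge whether the resulting code is correct, which is exactly the skill the episode argues will remain essential.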
However, the argument that children should no longer learn programming is misleading!
Like mathematics, programming teaches logical thinking, creativity, and problem-solving skills. Learning to program can enhance one's ability to approach problems systematically. That is an invaluable skill not just in tech.
Programming is an awesome learning tool generally:
The feedback loop is very fast. You build something and can immediately test whether it runs and works. You get an error and fix it within minutes. You fail quickly, but you learn quickly as well.
Coding is open-ended. You can and must think outside the box, be creative, and use your imagination. You can build whatever you want.
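That fast feedback loop can be sketched in a few lines of Python (the buggy and fixed functions are invented for illustration): run the code, see the wrong result, fix it, and run it again, all within minutes.

```python
# First attempt: contains a classic off-by-one bug.
def sum_to_n_buggy(n):
    total = 0
    for i in range(n):  # bug: range(n) yields 0..n-1, so n is skipped
        total += i
    return total

# Running it immediately exposes the problem:
print(sum_to_n_buggy(5))  # expected 1+2+3+4+5 = 15, prints 10

# Second attempt, after reading the output and adjusting the range:
def sum_to_n(n):
    total = 0
    for i in range(1, n + 1):  # include n itself
        total += i
    return total

print(sum_to_n(5))  # prints 15
```

The whole fail-inspect-fix cycle takes minutes, which is what makes programming such an effective learning tool.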
Google’s new AI and its struggles
Google recently released its new AI model, Gemini AI, but the launch was anything but smooth.
The model generated historically inaccurate images and outputs that raised racial concerns, so Google paused the image generation tool.
Why did it happen?
A primary issue was the rush to release the product. Reports suggest that bad Gemini responses slipped through testing because there was a hurry to ship the app.
AI competition is a top priority for major tech companies, since it is currently the hottest topic in the market.
It turns out that the Gemini model itself does not power the Gemini app. Instead, an older text-to-image model was used to supplement Gemini.
Google co-founder Sergey Brin admitted that the problems are from a lack of detailed testing and said that the model is still a work in progress.
"The model is politically leaning to the left. Why? We haven't fully understood."
- Sergey Brin
Now that’s really dangerous territory. Even the creators of these AI models cannot fully explain what happens in the background!
Is it only Gemini?
All AI models have similar challenges, though they may not be as visually striking as those seen with Gemini.
ChatGPT has also been criticised for its “political views”.
An AI model is, after all, a creation of humans and will only be as effective as the programmers and data behind it.
Poor performance can result from bad coding, inadequate training data, or a combination of both. In that sense, the AI is never “wrong” on its own; it simply reflects what it was given.