Programming with AI is neither magic nor impossible; it’s a challenge: tricks and strategies to squeeze the most out of AI for programming.


If someone tells you that using language models (LLMs) for programming is a piece of cake, they are either deceiving you or have been lucky enough to find a formula that eludes most. Because, let’s be honest, getting an LLM to write good code is not intuitive. It’s like having an assistant who can be incredibly efficient… or totally unpredictable.

Here’s the dilemma: there are those who get spectacular results and those who end up frustrated with incoherent answers or code that simply doesn’t work. What’s the difference? Expectations and strategy.

So I’ve based this post on an article by Simon Willison, a prestigious British programmer, to gather here the best strategies for making AI your travel companion from now on (the kind you don’t want to kick out of the car).

First, you have to assume that LLMs are not magic tools. They are models that predict the next word based on patterns, and while this makes them tremendously useful, it also makes them unpredictable. They are great at writing code, but that doesn’t mean they always do it well. Sometimes, they can invent functions that don’t exist or make mistakes that a human would never make.

The trick is to understand that an LLM does not replace the programmer, but rather amplifies their ability.

It’s not about throwing out a vague idea and expecting production-ready code, but about leveraging them as a fast assistant, capable of generating examples, automating repetitive tasks, and helping you explore new ways to solve a problem.

But of course, they also have their limits. Their knowledge is anchored to the date they were trained, so if a framework changed radically after that moment, the model won’t have a clue. To deal with this, you need to provide updated information or thoroughly verify what it generates.

So, first lesson: 💡 if you expect an LLM to do the work without supervision, you’re going to be disappointed. But if you use it as a strategic boost, you can gain speed and creativity in your code like never before.

The trick is in the context: how to get the most out of LLMs

If there’s a secret to making the most of a language model when programming, it’s this: context is everything. It doesn’t matter whether you’re using ChatGPT, Claude, or Gemini: the difference between receiving useful code and an absolute disaster largely depends on how you handle the information you give it.

First, you need to understand how these models work. They don’t have long-term memory; all they know at any given moment is what’s within the context of the conversation. ☝🏼 Every time you start a new chat, it’s like talking to someone who has just been born and has no idea what you told them before. This means that if the conversation starts to go awry, sometimes the best thing is to restart from scratch.
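To see why this matters, here’s a minimal sketch using the OpenAI Python client (the model name and prompts are illustrative): each API call is stateless, so the model only ever knows what travels inside the messages of that call.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# First call: the model only sees what is inside `messages`.
client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "My project uses Django 5. Remember that."}],
)

# Second call: a brand-new context. Unless you resend the earlier
# messages yourself, the model has no idea Django was ever mentioned.
second = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Which framework does my project use?"}],
)
print(second.choices[0].message.content)  # it can only guess
```

Chat interfaces hide this by resending the whole conversation on every turn, which is exactly why a long chat that has gone off the rails is often easier to abandon than to rescue.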

Now, some development environments that integrate LLMs have found ways to improve this. Tools like Cursor or VS Code Copilot can automatically feed the context with the content of open files or even relevant documentation. This allows the model to work with more information and provide much more accurate responses.

But even without advanced tools, there are ways to manually improve the context. ☝🏼 One of the best strategies is to give concrete examples. Instead of asking for something in a generic way, provide previous code that follows the structure you need and ask it to adapt it. Another option is to feed the model with several similar solutions and ask it to create a new one based on those patterns.
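As a sketch of what that example-driven approach can look like (the function and table names here are invented for illustration), you can paste existing code into the prompt and ask for a sibling in the same style:

```python
# Example-driven prompt: show the model code that already follows your
# conventions, then ask for a new function in exactly the same style.
# (get_user, get_order, and the tables are hypothetical.)
EXISTING_CODE = '''
def get_user(user_id: int) -> dict:
    """Fetch a user by id and return it as a dict."""
    row = db.fetch_one("SELECT * FROM users WHERE id = ?", (user_id,))
    return dict(row)
'''

prompt = (
    "Here is a function that follows our project's conventions:\n"
    f"{EXISTING_CODE}\n"
    "Write get_order(order_id: int) -> dict in exactly the same style, "
    "querying the orders table."
)
```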

It’s also key to know when to feed the model with external documentation. If you’re using a library that the model doesn’t know (because it came out after its last update), you can copy and paste recent examples and have it take those into account.
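A minimal sketch of that pattern (the build_prompt helper and the docs path are hypothetical):

```python
import pathlib


def build_prompt(task: str, docs_file: str) -> str:
    """Prepend fresh documentation so the model isn't limited to its training cutoff."""
    docs = pathlib.Path(docs_file).read_text()
    return (
        "Here is the current documentation for the library:\n\n"
        f"{docs}\n\n"
        f"Using only the API shown above, {task}"
    )


prompt = build_prompt(
    "write a function that uploads a file and returns its public URL.",
    "docs/quickstart.md",  # hypothetical path to recently published docs
)
```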

And something that many people don’t take advantage of: ☝🏼 LLMs can improve significantly if you make them work in iterations. Don’t expect them to give the perfect answer the first time. Try asking for a simple version, check if it’s on the right track, and then ask it to optimize or make it more advanced. This way, you have control over the outcome and can shape it to your needs.
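Continuing the earlier stateless-API sketch (model name and prompts still illustrative), iterating simply means appending the model’s draft and your refinement to the history before calling again:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

history = [{"role": "user", "content": "Write a minimal Python function that parses one CSV line."}]
draft = client.chat.completions.create(model="gpt-4o-mini", messages=history)

# Keep the first draft in context, then ask for the next refinement.
history.append({"role": "assistant", "content": draft.choices[0].message.content})
history.append({"role": "user", "content": "Good. Now handle quoted fields and add type hints."})
improved = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(improved.choices[0].message.content)
```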

💡 Second important lesson: if you learn to manage context, you can make an LLM go from being a repetitive parrot to a true programming assistant.

From digital intern to coding companion: clear instructions, better results

Here’s another fundamental key: ☝🏼 LLMs work better when you tell them exactly what to do. If you treat them like an experienced team member who “already knows” how everything works, they will disappoint you. But if you see them as a digital intern who needs precise instructions, things change.

When you ask an LLM for code, the difference between a mediocre result and a useful one lies in how you explain the task. It’s not the same to say:

🔴 “Make me a function in Python to download a file”

as

🟢 “Write a function in Python called download_file(url: str) -> pathlib.Path that uses httpx to download a file and save it in a temporary folder. If the file exceeds 5MB, it should raise an error. Return the path of the saved file.”

The first prompt is too vague and the model could invent anything. The second one clearly states what it has to do, how the function should be named, what parameters it should receive, what technology to use, what constraints to follow, and what it should return.
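For reference, here’s a sketch of what that second prompt might plausibly produce. The streaming size check and the choice of ValueError are my own assumptions, since the prompt only says “raise an error”:

```python
import pathlib
import tempfile

import httpx

MAX_BYTES = 5 * 1024 * 1024  # the 5MB limit from the prompt


def download_file(url: str) -> pathlib.Path:
    """Download `url` into a temporary folder and return the saved path."""
    filename = pathlib.Path(httpx.URL(url).path).name or "download"
    target = pathlib.Path(tempfile.mkdtemp()) / filename
    downloaded = 0
    try:
        with httpx.stream("GET", url, follow_redirects=True) as response:
            response.raise_for_status()
            with target.open("wb") as fh:
                for chunk in response.iter_bytes():
                    downloaded += len(chunk)
                    if downloaded > MAX_BYTES:
                        raise ValueError(f"{url} exceeds the 5MB limit")
                    fh.write(chunk)
    except ValueError:
        target.unlink(missing_ok=True)  # don't leave a partial file behind
        raise
    return target
```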

Another foolproof strategy: ☝🏼 have the model write code in blocks. First, ask for the main function. Then, tell it to write the unit tests with pytest. After that, ask it to add error handling. This way, you maintain control and can correct at each step.
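Following that recipe with the download_file sketch above (assuming it lives in a hypothetical downloader.py), step two might look like this with pytest, using httpx.MockTransport so no real network is involved:

```python
import httpx
import pytest

import downloader  # hypothetical module containing download_file and MAX_BYTES


def fake_stream(payload: bytes):
    """Build a replacement for httpx.stream that serves `payload` in memory."""
    transport = httpx.MockTransport(lambda request: httpx.Response(200, content=payload))
    client = httpx.Client(transport=transport)

    def _stream(method, url, **kwargs):
        return client.stream(method, url, **kwargs)

    return _stream


def test_small_file_is_saved(monkeypatch):
    monkeypatch.setattr(downloader.httpx, "stream", fake_stream(b"hello"))
    path = downloader.download_file("https://example.com/data.txt")
    assert path.read_text() == "hello"


def test_oversized_file_raises(monkeypatch):
    too_big = b"x" * (downloader.MAX_BYTES + 1)
    monkeypatch.setattr(downloader.httpx, "stream", fake_stream(too_big))
    with pytest.raises(ValueError):
        downloader.download_file("https://example.com/big.bin")
```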

And something that surprises many: ☝🏼 writing your prompts in English can be more efficient. Although LLMs understand multiple languages, most of their training data is documentation and code in English, so they tend to produce better results when you give them instructions in that language.

What’s the final result? If you know how to give clear instructions, the LLM not only saves time but also returns cleaner code with fewer errors.

Speed, learning, and errors: the winning combo of LLMs

If there’s one thing LLMs really shine at, it’s speed. It’s not that they do everything perfectly, but when you learn to use them well, they allow you to develop faster and test ideas that you otherwise wouldn’t even bother trying.

Think about those projects you have in mind but never start because you’re too lazy to research every detail from scratch. That’s where an LLM can change the game. Instead of wasting hours figuring out how to structure something, you can ask for a prototype in minutes and go from there… And yes, the initial code may not be perfect, but at least you have a starting point.

Moreover, ☝🏼 using LLMs is a fantastic way to learn. If you give them good context and know how to guide them, they can explain code to you, suggest improvements, and even show you alternatives you hadn’t considered. It’s like having a tutor who never gets tired of answering questions, no matter how many times you ask the same thing.

That said, one thing must be clear: ☝🏼 errors are part of the process. You can’t blindly trust what an AI model returns to you. You always, always have to test the code. No matter how convincing the answer sounds, if you don’t execute and validate it, you can’t assume it works.

But here’s the interesting part: errors also help to understand the limits of the tool. If you discover that a model always fails at a certain task, you can anticipate, provide more context, or even change your approach.

In short, LLMs are not a magic wand, but if you use them well, they can make you code faster, learn more, and take your projects to another level. However, the final control should always be in human hands.
