Technical Founder POV: The age of one-button LLM app generation is a ways off, but... [I will not promote]
I’ve been having all of the frontier models write code over the last few months, and the overall experience has ranged from frustration to awe. I’ve been a professional software engineer since 2001, but my father taught me how to code in BASIC in the mid-’80s when I was a little kid, so I’ve literally been coding my entire life.

LLM code gen is slowly becoming a reality, but there are caveats. If you’re not a technical person, you will probably have a bad time. Keep in mind, though, that the current state of the art is the slowest, dumbest, and most expensive these models will ever be (hopefully). Here are some thoughts, in no particular order:

* AI is for real, but your results will vary wildly depending on how well you’re able to work with a particular model.
* Pick a language popular with AI/ML/big-data engineers (i.e., pick Python). The people behind these models are all biased toward Python, and in my opinion the best results are in Python.
* Force your favorite model to ask clarifying questions first. For whatever reason, these models tend to blast out code first and ask questions later. Make it do the inverse. You’ll find that sometimes *your* assumptions are wrong. (There’s a quick sketch of what I mean at the bottom of this post.)
* Force your favorite model to use a search feature so that dependencies, best practices, and API calls are up to date. Many of these models will pull API schemas that are several years out of date. If you get stuck, make sure the documentation is accessible; sometimes it isn’t (I’m looking at you, Brave Search API). In those cases, print the documentation to PDF and add it to the context of your chat.
* When possible, have the model write unit tests first. This lets you give it structured feedback much more rapidly. (Also sketched at the bottom of this post.)

My highly unscientific (and biased) ranking of the models’ day-to-day usefulness: Claude 3.5 Sonnet w/ MCP-enabled search > DeepSeek R1 > o3-mini > o1. I want to believe in the OpenAI models, but literally today I had to take code from o3-mini to Claude to fix an error I was getting, and the error itself was from the OpenAI API. I couldn’t get o3-mini to fix the damn problem, but Claude could... WTF?

Anyway, LLM coding isn’t at the level where just anyone off the street can reliably write solid code, but in the hands of an experienced developer who needs to wear a lot of hats (i.e., a startup technical founder), it’s definitely worth leveraging!
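To make the “ask clarifying questions first” tip concrete, here’s a minimal sketch using the official OpenAI Python SDK. The model name, system prompt wording, and example task are all just placeholders I made up; swap in whichever model and client you actually use.

```python
# Minimal sketch: force the model to ask clarifying questions before writing code.
# Assumes the official OpenAI Python SDK (`pip install openai`) with OPENAI_API_KEY set.
# The model name and prompts below are placeholders, not a recommendation.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a senior software engineer. Before writing ANY code, ask me "
    "numbered clarifying questions about requirements, constraints, and edge "
    "cases. Do not produce code until I have answered them."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Build me a CLI that syncs a local folder to S3."},
    ],
)

# If the system prompt is doing its job, this prints questions, not code.
print(response.choices[0].message.content)
```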
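And here’s roughly what I mean by tests-first: get the model to commit to behavior as unit tests, review those, and only then ask for the implementation, so your feedback loop is pass/fail output instead of vibes. Plain stdlib `unittest`; the `slugify` function is an invented example, not from any real project.

```python
# Tests-first sketch: agree on the tests with the model, THEN ask for the
# implementation, and paste any failures back to it verbatim.
# `slugify` is a made-up example function.
import unittest


def slugify(title: str) -> str:
    # The implementation the model writes *after* the tests are locked in.
    return "-".join(title.lower().split())


class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_extra_whitespace(self):
        self.assertEqual(slugify("  Hello   World  "), "hello-world")


if __name__ == "__main__":
    unittest.main()
```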