How to keep up with AI/ML as a full stack dev?
79 points by waspight 11 months ago | 77 comments
- localghost3000 10 months ago
> I most often can’t see any use case for AI/ML
I'm admittedly a skeptic on all this, so take what I am about to say with a grain of salt: You should trust that voice. We're in a hype cycle. It was VR before and crypto before that. Big tech is trying _very_ hard to convince you that you need this. They need you to need this tech because they are lighting billions on fire right now trying to make it smart enough to do anything useful. Short of them coming up with a truly miraculous breakthrough in the next 12 to 24 months (very unlikely, but there's always a chance), investors are gonna get fed up and turn off the money fountain.
It's always a good idea to learn and grow your skillset. I am just not sure this is an investment that will pay off.
- godelski 10 months ago
ML researcher here.
I will second this. Even if you think localghost is wrong about AI, it is important to always trust that voice of skepticism (to a limit).
But I will say that we are in a hype cycle, and as a researcher I'm specifically worried about this. I get that we have to bootstrap because you can't just say "we want to spend money on research" (why?), but if you make a bubble, the goal is to fill that bubble before it pops. The more hype you make, the more money you get, but the quicker that bubble pops. My concern here is that too much hype makes it difficult to distinguish charlatans from honest people. Charlatans will jump from one cool topic to the next (don't trust someone who was a VR founder, then a crypto founder, and now an ML founder. Trust people who have experience and can stick with a topic for longer than a hype cycle).
The big danger is that if charlatans dominate the space, the hype disappears, and then there is no money for anyone. So if you do believe in the possibility of AGI and that AI/ML can make the world better (I truly do), make sure that we don't over-hype. There's already growing discontent for products pushed too early with too big promises. If you really believe (like I do), you have to get rid of the bad apples before they spoil the whole barrel.
- mnky9800n 10 months ago
Yes, as someone who works in geophysics and AI, I see a lot of people promising things that no neural network will be able to do, no matter how much attention it has, because good data is what people actually need and they typically lack it. There's a ton of use cases across geophysics for AI; I'm even organising a conference at the end of September about this. But imo there's a bigger need for better data and better software tools first.
- Yhippa 10 months ago
This is such a good perspective and thank you for posting. I agree with your statements and of all the hype cycles that have happened, I think this does have a real shot of becoming something. Because of that I think they’re going to keep throwing money at this until someone figures it out. Because what else is there left to grift on in tech right now?
- godelski 10 months ago
> I think this does have a real shot of becoming something.
I wouldn't be doing a PhD if I didn't. PhDs are terrible. I'm amazed people do them for "the money" and not the passion.
> Because of that I think they’re going to keep throwing money at this until someone figures it out.
My concern is who they throw money at, and even more specifically who they don't throw money at.
Some people known to do carpet pulls, with no prior experience in ML, who throw together a shitty demo that any ML researcher should be skeptical of? $_$ PhD researchers turning their theses into a product? ._. Something's off.... But I guess when Eric Schmidt is saying you should steal and ask for forgiveness later, I don't think anyone should be surprised when unethical behavior becomes prevalent.
> Because what else is there left to grift on in tech right now?
l̶i̶f̶e̶ Hype finds a way. There's always something to grift.
The key thing to always recognize: grifters are people who have solutions and are looking for problems (e.g. hamstringing AI into everything), while honest people have problems and are looking for solutions (i.e. people who understand the limits of what we can do, the nuances of these things, and are looking to fill in that gap). I can tell you right now, anyone saying anything should be end-to-end AI is a grifter (including Google search). We just aren't there yet. I hope we get there, but we are quite a ways off. Pareto is a bitch when it comes to these things.
- ryandvm 10 months ago
I do not understand the AI naysayers.
The other day I had an idea for a Chrome plugin. I'm a senior dev, but I've never made a Chrome plugin. I asked ChatGPT 4o if my idea was possible (it was) and then I asked it to create an MVP of the plugin. In 10 seconds I had a full skeleton of my plugin. I then had it iterate and incrementally add capability until it was fully developed.
I had to do some stylesheet tweaking and it asked for a permission that we didn't need, but otherwise it completely nailed it. Easily provided 95% of the work for my extension.
I was able to do in 60 minutes what would have probably taken several days of reading specs and deciphering APIs.
Is my Chrome plugin derivative? Yes. Is most of what we all do every single day derivative? Also yes.
How are people still skeptical of the value that LLMs are already delivering?
- jryan49 10 months ago
It's probably because it's providing different amounts of value to different people. For some people, it's not giving any benefits, and in fact making their work harder (me). They are skeptical because people naturally don't believe each other when their personal experience doesn't match up with another's.
- BigParm 10 months ago
It's the best API searcher ever made but most people don't search APIs. They are waiting for it to make them a grilled cheese or something.
- daemonologist 10 months ago
It's the best API searcher for APIs which are used a lot. If you want to do anything other than the most common thing, it can be worse than useless. (I've been running into this in a project where I'm using Svelte 5, and the language models are only interested in telling me about/writing Svelte 4, and transformers.js, where they tend to veer off towards tensorflow.js instead. This despite me explicitly mentioning what version/library I'm using and the existing code being written for said version.)
Anyways, they can definitely be very useful, but they also have a golden path/winning team/wheel rut effect, which is not always desirable.
- kwindla 11 months ago
"Generative" AI/ML is moving so fast in so many directions that keeping up is a challenge even if you're trying really hard to stay current!
I'm part of a team building developer tools for real-time AI use cases (voice and video). I feel like I have three overlapping perspectives and goals re this new stuff:
1. To figure out what we should build I need to have a good understanding of what's possible and useful right now.
2. I talk to our customers a lot. Helping them understand what's possible and useful today (and what that might look like six months or a year from now) is part of my job.
3. I think this is a step-function change in what computers are good at, and that's really exciting and intellectually interesting.
My AI information diet right now is a few podcasts, twitter, and email newsletters. A few links:
- Latent space podcast and newsletter: https://www.latent.space/podcast
- Ben's Bites newsletter: https://news.bensbites.com/
- Ethan Mollick newsletter: https://www.oneusefulthing.org/
- Zvi Mowshowitz newsletter: https://thezvi.substack.com/
- Rohan Paul twitter: https://x.com/rohanpaul_ai
- swyx 11 months ago
(got pinged here from f5bot) thanks so much kwindla :)
always looking for ideas on how to serve this audience better. feel like there could be more I should be doing.
- jamestimmins 10 months ago
Hey Swyx, I'm a dev who did your (excellent!) email LLM course, so maybe I can give some info. I'm in the Latent Space Discord and have been trying to figure out what's next after the course. The challenge I've found is that most online discussions about LLMs are either very basic or assume a fair amount of context (true for the Latent Space discussion/podcast, as well as Karpathy's videos).
I've been trying to find the best next step, and the options that seem fruitful from my vantage point are:
1. Cohere's LLM University - Seems to go more in depth into terms like embeddings that are still pretty unclear to me.
2. promptingguide.ai - For similar reasons, in that it covers terms and concepts I see a lot but don't know much about.
3. Reading survey-level papers.
I'm including this info just in case it's useful to you, as I've really appreciated all the content you've put together.
One specific thing you or someone else could do that is simple yet high value is to create a list of "the first 20 LLM papers you should read". I've looked for this to build out more base knowledge, but have yet to find it. Suspect it would be helpful to others as well.
- fenesiistvan 10 months ago
I have VoIP software (https://www.mizu-voip.com/Software/SIPSDK/JavaSIPSDK.aspx) and I am trying to market it as the interface between AI and real-time audio/video. It already has real-time in/out streaming capabilities; I just want to add some more helper methods to make it more obvious for AI input/output. Can you help me with a little feedback? I am trying to think with the mind of an AI developer and I am interested in your thoughts on how to implement the real-time interactivity for your software/service. Is our JVoIP library close to your requirements, or are you going to use something completely different to interact with the end users and/or backend services? (What kind of software/service are you thinking of, more exactly, to cover this part?)
- nickdichev 10 months ago
What company do you work for? I am working in the field and curious what the product is
- ynniv 10 months ago
My take requires a lot of salt, but… this time it’s different.
Try writing a single-page web app or a command-line Python app using the Claude 3.5 chat. Interact with it like you might in a pair programming session where you don’t have the keyboard. When you’ve got something interesting, have it rewrite it in another language. Complain about the bugs. Ask it what new features might make it better. Ask it to write tests. Ask it to write bash scripts to manage running it. Ask it how to deploy and monitor it. Run llama 3.1 on your laptop with ollama. Run phi3-mini on your phone.
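If you want to try the local-model suggestion, here is a minimal sketch of talking to a locally running Ollama server from Python. It assumes Ollama is installed and running on its default port, that you've already pulled the model with `ollama pull llama3.1`, and that `requests` is available; the prompt is just a placeholder.

    import requests

    # Ollama's local HTTP API; stream=False returns one JSON object with a "response" field.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.1",
            "prompt": "Explain the difference between a Chrome extension content script and a background script.",
            "stream": False,
        },
    )
    print(resp.json()["response"])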
The problem is that everyone says they aren’t going to get better, but no one has any data to back that up. If you listen carefully it's almost always based on a lack of imagination. Data is what matters, and we have been inventing new benchmarking problems because they're too good at the old ones. Ignore the hype, both for and against: none of that matters. Spend some time using them and decide for yourself. This time is different.
- alisonatwork 10 months ago
The question is what does programming with an LLM get you over batteries-included frameworks with scaffolding like Rails or Django? If the problem only requires a generic infra solution put together by an LLM instead of a bespoke setup, why not look into low-code/no-code PaaS solutions to start with? Unless the LLM is going to provide you with some uniquely better results than existing tools designed to solve the same problems, it feels like a waste of resources to employ GPUs to do what templates/convention-over-configuration/autocomplete/etc already did.
The point isn't that LLMs are useless, or that they aren't interesting technology in the abstract. The point is that aside from the very real entertainment value of being able to conjure artwork apparently out of thin air, when it comes to solving practical problems in the tech space, it's not clear that they are achieving significantly more - faster or cheaper - than existing tools and methods already did.
You're right that it's probably too early to have data to prove their utility either way, but given how much time, money and energy many companies have already sunk into this - precisely without any evidence to prove it's worthwhile - it does come across rather more like a hype cycle at the moment.
- ynniv 10 months ago
> The question is what does programming with an LLM get you over batteries-included frameworks with scaffolding like Rails or Django?
Three years ago an LLM would conversationally describe what the code would look like.
Two years ago it might crib common examples with minor typos.
Last year it could do something that isn't on StackOverflow at the level of an intern.
Earlier this year it could do something that isn't on StackOverflow at the level of a junior engineer.
Last week I had a conversation with Claude 3.5 that went something like this:
  Write an interactive command-line battleship game
  Write a mouse interactive TUI for it
  Add a cli flag to connect to `ollama` and ask it to make guesses
  There's a bug: write the AI conversation to a file so I can show you
  Try some other models: make options for calling OpenAI and Anthropic
  GPT and Anthropic are throwing this error (it needed to switch APIs)
  The models aren't doing as well as they can: engage them more conversationally
Elapsed time: a few hours. I didn't write any code. Keep in mind that unlike ChatGPT, Claude can't search the net for documentation - this was all "from memory".
What will LLMs do next year?
- mnky9800n 10 months ago
I read these stories about using LLMs and I always wonder if it's survivor bias. Like I believe your experience. I've also had impressive results. But also a lot of times the AI gets lost and doesn't know what to do. So I'm willing to see it as a developer tool, but it's hard to see it becoming more general purpose in the "next 6 months" time frame people have been promising for the last two years.
- WhyOhWhyQ 10 months ago
I played with it a year ago and it really hasn't improved much since then. I even had it produce a few things similar to your battleship demo.
And next year I don't see it improving much either if the best idea anybody has is just to give it more data, which seems to be the mantra in ML circles. There's not an infinite supply of data to give it.
- ryandvm 10 months ago
Absolutely. I posted a similar experience developing a Chrome extension with GPT-4o in an hour or so when it would have taken me at least a day to do on my own. I have no idea how people are hand waving LLMs away as no big deal.
I think the only justification for such a position is if you are a graybeard with full mastery of a stack and that's all you work in. I've dealt with these guys over the years and they are indeed wizards at Rails or Django or what have you. In those cases, I could see the argument that they are actually more efficient than an LLM when working on their specialty.
Which I guess is the difference. I'm a generalist and I'm often working in technologies that I have little experience in. To me LLMs are invaluable for this. They're like pair programming with somebody that has memorized all of Stack Overflow.
- ruszki 10 months ago
Where did you get that it can figure out things which were not fed into it (e.g. not on Stack Overflow)? In the past year, none could answer any of my questions, for which I couldn’t find anything on Google, in any reasonable way. They failed very badly when there was no answer to my question, and the question should have been changed.
- BobbyJo 10 months ago
> The question is what does programming with an LLM get you over batteries-included frameworks with scaffolding like Rails or Django?
You can use them on top of those frameworks. The point is, you + LLM is generally a way faster you no matter what tech you're using.
- makeitrain 10 months ago
We’re going to have AI building Drupal sites soon. The platform is well architected for this. Most of the work is generating configuration files that scaffold the site. There are already AI integrations for content. The surface area is relatively small, and the options are well defined in code and documentation. I would not be surprised if we pull this off first. It’s one of the current project initiatives.
The coding part is still a hard problem. AI for front end and module code is still pretty primitive. LLMs are getting more helpful with that over time.
- prisenco 10 months ago
I have not seen evidence of LLM use making programming way faster. Neither in my own work, nor in the work of others who make this claim.
- blobbers 10 months ago
I spent some time trying to get ChatGPT to write a front end in JS. It would plot using a library and then when I complained about a bug it would say "Oh you're right, this library does not implement that commonly implemented method, instead use this code." and then would get in a circle of spitting out buggy code, fixing a bug, and then reintroducing an old bug.
It was okay, but kind of annoying. I understand JS well enough to just debug the code myself, but I wanted it to spit out some boilerplate that worked. I can't remember if it was ChatGPT omni I was using or if it was still 3.5. It's been a short while.
Anyways, it is cool tech, but I don't feel like it offers the same predictive abilities as classic ML involving fits, validation, model selection, etc. for very specific feature sets.
- Yhippa 10 months ago
What you described was the exact same experience I had. I got so far off track in one of my conversations with corrections that I started all over again. It is neat that this technology can do it, but I probably would have been better off doing it manually to save time.
The other thing I’ve noticed is something you alluded to: the LLM being “confidently incorrect”. It speaks so authoritatively about things and when I call it out it agrees and corrects.
The more I use these things (I try to ask against multiple LLMs) the more I am wary of the output. And it seems that companies over the past year rushed to jam chatbots into any orifice of their application where they could. I’m curious to see if the incorrectness of them will start to have a real impact.
- SwiftyBug 10 months ago
One thing I noticed about this behavior of LLMs "seeing" their error when you correct them is that sometimes I'm not even correcting them, just asking follow up questions that they interpret as me pointing out some mistake. Example:
Me: - Write a Go function that will iterate over the characters of a string and print them individually.
~Claude spits out code that works as intended.~
Me: - Do you think we should iterate over runes instead?
Claude: – You are absolutely right! Sorry for my oversight, here's the fixed version of the code:
I just wanted to reason about possibilities, but it always takes my question as if I'm pointing out mistakes. This makes me feel not very confident in their answers.
- elicksaur 10 months ago
> If you listen carefully it's almost always based on a lack of imagination.
I actually find things to be the opposite. My skepticism comes from understanding that what LLMs do is token prediction. If the output that I want can be solved by the most likely next token, then sure, that’s a good use case. I’m perfectly capable of imagining those cases. People who are all in on AI seem to not get this and go wild.
There’s a difference between imagination and magical thinking.
- ynniv 10 months ago
My disappointment comes from understanding that what humans do is keystroke prediction. If the output that I want can be solved by the most likely next keystroke, then sure, that’s a good use case. I’m perfectly capable of imagining those cases. People who are all in on humanity seem to not get this and go wild.
Don't mistake the "what" for the "how". What we ask LLMs to do is predict tokens. How they're any good at doing that is a more difficult question to answer, and how they are getting better at it, even with the same training data and model size, is even less clear. We don't program them, we have them train themselves. And there are a huge number of hidden variables that could be encoding things in weird ways.
These aren't n-gram models, and you're not going to make good predictions treating them as such.
- elicksaur 10 months ago
> Like previous GPT models, the GPT-4 base model was trained to predict the next word in a document…
https://openai.com/index/gpt-4-research/
What humans do is materially different than that. When someone asks me a question, I don’t come up with an answer by thinking, “What’s the first word of my response going to be? The second word?…”
I understand that the AI marketing wants us to believe there’s more magic than that quote, but the actual technical descriptions of the models are what should be considered.
Also, skepticism =/= disappointment and swapping those out greatly changes what the sentence says about my feelings on the matter. Tech from OpenAI and friends can’t really disappoint me. I have no expectation that it won’t just be a money grab ;)
- inerte 10 months ago
Use ChatGPT or Claude for your day-to-day questions, technical or not. You'll quickly figure out in which areas using Google is still better. ChatGPT can probably do more than you think and handle more complex requests than you're probably assuming.
Regarding your projects, either just brute force it into an existing one, or start a new project. For the former, the purpose isn't to make the product better (exactly) but for you to learn. For the latter, OpenAI and Anthropic APIs are good enough to mess around and build a lot of different things. Don't let analysis paralysis stop you, start messing around and finding out.
- godelski 10 months ago
As an ML researcher, my advice for you is: don't.
ML moves fast, but not as fast as you probably think. There's a difference between innovations in architectures and demonstrations of them in domains (both are useful, both are necessary research, but they are different).
Instead, keep up with what tools are relevant to you. If things are moving fast and aren't sticking, then in a way they aren't moving fast, are they? You're just chasing hype and you'll never keep up.
On the production side, I also see a common mistake of relying on benchmarks too heavily. I understand why this happens, but the truth is more nuanced than this. Just because something works well on a benchmark does not mean it will work well (or better than others) on your application. ResNet is still commonly used and still a great option for many applications. Not everything needs a 1B+ transformer. Consider your constraints: performance, compute, resource costs, inference time, and all that jazz. Right now if you have familiarity (no need for expertise) in FFNs (feed forward/linear), CNNs, ResNets, and Transformers, you're going to be fine. Though I'd encourage you to learn further about training procedures like GANs (commonly mistaken as an architecture), unsupervised pretraining (DINO), and tuning. It may be helpful to learn a high level of diffusion and LLMs, but it depends on your use cases. (And learn whatever you're interested in and you find passion in! Don't let need stop you, but if you don't find interest in this stuff, don't worry either. You won't be left behind)
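To make the "not everything needs a 1B+ transformer" point concrete, here is a rough sketch of reusing a pretrained ResNet for a small custom task. It assumes PyTorch and torchvision are installed; `num_classes` and the dummy input are placeholders for your own data.

    import torch
    from torchvision import models

    # Load an ImageNet-pretrained ResNet-18 and swap the classification head
    # for a small task of your own (num_classes is a placeholder).
    num_classes = 5
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)

    # One dummy forward pass: a batch of 8 RGB images at 224x224.
    x = torch.randn(8, 3, 224, 224)
    print(model(x).shape)  # torch.Size([8, 5])

From here you would freeze or fine-tune layers depending on how much data and compute you have; the point is just that a small, well-understood architecture is often enough.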
If you aren't just integrating tools and need to tune models, then do spend time learning this and focusing on generalization. The major lessons learned here have not drastically changed for decades and it is likely to stay that way. We do continue to learn and get better, but this doesn't happen in leaps and bounds. So it is okay if you periodically revisit instead of trying to keep up in real time. Because in real time, gamechangers are infrequent (of course everyone wants to advertise being a gamechanger, but we're not chasing every new programming language, right?). Let the test of time reduce the noise for you.
> I most often can’t see any use case for AI/ML in our products
This is normal. You can hamfist AI into anything, but that doesn't mean it is the best tool for the job. Ignore the hype and focus on the utility. There's a lot of noise and I am extremely sympathetic to this.
Look to solve problems and then find the right tool for the problem; don't look for problems to justify a tool (though that's fine for educational purposes).
- conwy 10 months ago
If you're looking to maximise employability / pay scale, maybe you can do some small side projects, just enough to showcase curiosity/open-mindedness.
Examples:
- Build a useful bash script using ChatGPT prompts and blog about it
- Build a text summariser component for your personal blog using Xenova / Transformers.js
- Build an email reply bot generator that uses ChatGPT prompt with sentiment analysis (doesn't have to actually send email, it could just do an API call to ChatGPT and print the message to the screen).
Just a few small examples and maybe a course or two (e.g. Prompt Engineering for Developers) should look great.
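For the email reply bot idea above, a minimal sketch of the glue code involved might look like this. It assumes the openai Python package and an OPENAI_API_KEY environment variable; the model name and prompts are placeholders, and nothing is actually sent as email.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def draft_reply(incoming_email: str, sentiment: str) -> str:
        """Ask the model for a reply matching the detected sentiment; print it, don't send it."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": f"Draft a short, polite email reply. The sender's tone is {sentiment}."},
                {"role": "user", "content": incoming_email},
            ],
        )
        return resp.choices[0].message.content

    print(draft_reply("Your service has been down all morning!", sentiment="frustrated"))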
However I question how many companies really care about it right now. Most interviews I've done lately didn't bring it up even once.
But that said, maybe in a few months or a year it will become more essential for most engineers.
- whynotkeithberg 10 months ago
Do you really think a useful bash script using ChatGPT prompts is worth blogging about? I'm genuinely asking. I've been wanting to start my blog back up. I was always primarily a sysadmin, although I've had to move more into DevOps to keep with the times; instead of being an SRE/sysadmin like I used to be, I'm now DevOps meets sysadmin, where I'm not helping write our company's application but I do everything else, from CI/CD, monitoring, and log dashboards to creating infrastructure using Terraform, Ansible, etc.
So I don't want you to think my question was being sarcastic... I'm genuinely curious whether you think this sort of thing would be a useful or interesting thing to blog about, or whether it's only useful as a resume-building thing.
- conwy 10 months ago
I think this skill could save time in a very rushed business environment.
A while back I wrote a prompt to build a script that runs git-reflog to get the list of distinct authors. After a few small tweaks I got it roughly working. This took about 1 hour. Writing it myself would have definitely taken multiple hours, especially having to learn the details of git-reflog.
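For reference, a rough Python equivalent of the kind of script described above, assuming git is on PATH. It uses git log --all rather than git-reflog, which is a simplification, not the exact script the prompt produced.

    import subprocess
    from collections import Counter

    # List distinct commit authors across all refs, most frequent first.
    out = subprocess.run(
        ["git", "log", "--all", "--format=%an"],
        capture_output=True, text=True, check=True,
    ).stdout

    for author, count in Counter(out.splitlines()).most_common():
        print(f"{count:5d}  {author}")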
But that said I think it's mainly resume-building. ChatGPT isn't going to overall transform our productivity.
- bigtunacan 10 months ago
Did you get something done in an hour that would have taken multiple hours or not?
If you did then it transformed your productivity.
- ketzo 10 months ago
I think programming-with-chatbot is still such a new skill that a concrete, well-written example is a very useful document!
- stevofolife 10 months ago
Here’s the plan:
Run the following models:
- Speech-to-text
- Text-to-text
- Text-to-speech
- Text-to-image
- Image-to-text
- Text-to-video
- Video-to-text
Start by integrating third-party APIs, and later switch to open-source models.
Implement everything using your preferred backend language. After that, connect it to a frontend framework of your choice to create interactive interfaces.
You want to use your own data? Put it in a database, connect it to your backend, and run these models on your database.
Once you’ve done this, you’ll have completed your full stack development training.
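A minimal sketch of chaining two of those steps (speech-to-text, then text-to-text) through a third-party API might look like the following. It assumes the openai Python package, an OPENAI_API_KEY environment variable, and a placeholder audio file name; swap in open-source models later as the plan suggests.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    # Speech-to-text: transcribe a local audio file ("meeting.m4a" is a placeholder).
    with open("meeting.m4a", "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f).text

    # Text-to-text: summarize the transcript.
    summary = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Summarize this in three bullet points:\n\n{transcript}"}],
    ).choices[0].message.content

    print(summary)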
- asp_hornet 10 months ago
I think this is a great take. Those problems have traditionally been hard to solve in engineering and you can get pretty reliable solutions from just an api call.
- al_borland 10 months ago
It sounds like you are aware of a technology and in search of a problem. Don’t force it. Most things don’t need AI. Personally, I find it to be a turnoff when AI is forced into a product that doesn’t need it. It makes me write off the entire thing.
I am in a similar position to you… I have a job where the application of AI isn’t readily apparent. My posture during all this is to use AI products as they become available that are appropriate and help me, but ultimately I’m waiting for the market to mature so I can see if and how I should move forward once the bubble pops and directions are more clear. I have little interest in running on the hamster wheel that is bleeding edge AI development, especially when I already have a job that doesn’t need it.
- kukkeliskuu 10 months ago
In general it is a good idea to avoid searching for a problem.
But in my experience this in turn has a problem: you do not see the real problems you could solve with a piece of technology if you don't understand the technology.
So, sometimes, it makes sense to just start doing something with it. You will soon see potential uses, apply it, learn more, and overcome this hurdle.
Just do it without expecting any returns besides learning something new.
- ilaksh 10 months ago
I'm going to answer more from an applied perspective than 'real' ML.
Hacker News is a good source for news.
As far as learning, you have to build something.
I suggest you just start with example code from the OpenAI or Anthropic website for using the chat completion API. They have Node.js code.
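The vendors' sites show Node.js examples; for reference, a minimal Python sketch of the same kind of chat call using the anthropic package is below. It assumes an ANTHROPIC_API_KEY environment variable, and the model name is whatever current Claude model you pick.

    import anthropic

    client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

    msg = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=500,
        messages=[{"role": "user",
                   "content": "Outline a minimal Node.js Express API for a todo list."}],
    )
    print(msg.content[0].text)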
r/LocalLLaMA on reddit is interesting.
On Youtube, see Matt Wolfe, Matthew Berman, David Shapiro. Not really developer-focused but will mention developments.
You can search for terms like 'AI Engineer' or "agentic" or "LangChain" on youtube also.
To get motivated, maybe play around with the replicate.com API. It has cut and paste examples and many interesting models.
More ideas: search for "Crew AI" on X/Twitter.
- heavyset_go 10 months ago
Ignoring the hype, there are applications of ML that are suited to some general problems.
Would your product benefit from recommender systems, natural language input/output, image detection, summarization, pattern matching and/or analyzing large datasets?
If so, then maybe ML can help you out.
I'm of the opinion that if you need ML, you'll eventually realize it, because the solutions you find to your problem will be served by applications of ML.
That's to say, while doing research, due diligence, etc, you will inevitably stumble upon approaches that use ML successfully or unsuccessfully.
- joshvm 10 months ago
The comments here are very focused on LLMs, which makes sense - that's where the hype is. If you really don't mind ignoring the nuts and bolts, you can treat all the large language models as black boxes that are getting incrementally better over time. They're not difficult to interact with from a developer perspective - you send text or tokens and you get back a text response.
It's definitely worth trying them out as a user just to see what they're capable (and incapable) of. There are also some pretty interesting use cases for them for tasks that would be ridiculously complicated to develop from scratch where "it just works" (ignoring prompt poisoning). Think parsing and summarizing. If you're an app developer, look into edge models and what they can do.
Otherwise dip your toes in other model types - image classification and object recognition are also still getting better. Mobile image processing is driven by ML models at this point. This is my research domain and ResNet and UNet are still ubiquitous architectures.
If you want to be sceptical, ignore AI and read ML instead, and understand these algorithms are just another tool you can reach for. They're not "intelligent".
- f0e4c2f7 10 months ago
I'm not sure why but it seems like most of the high quality AI content is on twitter. On average it seems to be around ~4 months ahead of HN on AI dev approaches.
I would suggest following / reading people who talk about using Claude 3.5 sonnet.
Lots of people are developing whole apps using 3.5 Sonnet and sometimes Cursor or another editor integration. The models are getting quite good now at writing code once you learn how to use them right and don't use the incorrect LLMs (a problem I often see in places other than twitter, unfortunately). They seem to get better almost weekly now too. Just yesterday Anthropic released an update where you can now include your entire codebase as part of the prompt at a 90% token discount. Should make an already very good model much better.
Gumroad's CEO has also made some good YouTube content describing a lot of these techniques, but they're livestreams so there is a lot of dead air.
- kukkeliskuu 10 months ago
I have been very skeptical of AI, the smell of hype is strong in this one.
This said, I have started some experiments and like in all hyped technologies, there is some useful substance there as well.
My recommendation is to start with some very simple and real, repetitive need, and do it with the assistants API.
I started by turning a semi-structured word document of 150 entries into a structured database table. I would have probably done it more quickly by hand, but I would not have learned anything that way.
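A rough sketch of that kind of one-off extraction is below, using a plain chat completion rather than the Assistants API mentioned above. The field names, file names, and model are all placeholders; the idea is just "one semi-structured entry in, one structured row out".

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    def to_row(entry: str) -> str:
        """Turn one semi-structured entry into a CSV line (columns here are hypothetical)."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Extract name, year and category from the entry. "
                            "Reply with exactly one CSV line: name,year,category."},
                {"role": "user", "content": entry},
            ],
        )
        return resp.choices[0].message.content.strip()

    # entries.txt is a placeholder: one blank-line-separated entry per record.
    with open("entries.txt") as f, open("table.csv", "w") as out:
        out.write("name,year,category\n")
        for entry in f.read().split("\n\n"):
            out.write(to_row(entry) + "\n")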
I think the sweet spot for generative AI right now is not in creative jobs (creating code, creating business communication, content creation, etc.) but in mundane, repetitive things, where using generative AI seems like overkill at first.
- geuis 10 months ago
I'm a full stack eng that recently transitioned to a full time backend role.
I'd suggest learning about pgvector and text embedding models. It seems overwhelming at first but in reality the basic concepts are pretty easy to grok.
pgvector is a Postgres extension, so you get to work with a good traditional database plus vector database capabilities.
Text embeddings are easy to work with. Lots of models if you want to do it locally or adhoc, or just use OpenAI or GCP's api if you don't want to worry about it.
This combo is also compatible with multiple vendors, so it's a good onboarding experience to scale in to.
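A minimal sketch of the pgvector-plus-embeddings combo might look like this. It assumes the pgvector extension is already installed in your Postgres instance, that psycopg2 and the openai package are available with OPENAI_API_KEY set, and the table and column names are placeholders.

    import psycopg2
    from openai import OpenAI

    client = OpenAI()                       # assumes OPENAI_API_KEY is set
    conn = psycopg2.connect("dbname=app")   # placeholder connection string

    def embed(text: str) -> str:
        """Return a pgvector-style literal like '[0.1,0.2,...]' (1536 dims for this model)."""
        vec = client.embeddings.create(model="text-embedding-3-small", input=text).data[0].embedding
        return "[" + ",".join(str(x) for x in vec) + "]"

    with conn, conn.cursor() as cur:
        cur.execute("CREATE TABLE IF NOT EXISTS docs "
                    "(id serial PRIMARY KEY, body text, embedding vector(1536))")
        body = "pgvector stores embeddings next to your relational data"
        cur.execute("INSERT INTO docs (body, embedding) VALUES (%s, %s)", (body, embed(body)))
        # Nearest neighbours by cosine distance (<=> is pgvector's cosine distance operator).
        cur.execute("SELECT body FROM docs ORDER BY embedding <=> %s::vector LIMIT 5",
                    (embed("where do I keep vectors?"),))
        print(cur.fetchall())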
- supratims 10 months ago
In our org (large US bank) there has been a huge rollout of GitHub Copilot and adoption has been very successful. For me it became essential to learn how this can help me in day to day coding/testing/devops etc. Right now I am churning out Python code to parse a CSV and create a report. I have never learnt Python before.
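For anyone curious what that kind of beginner Python task looks like, here's a tiny sketch of parsing a CSV and printing a report using only the standard library; the file name and column names ("team", "status") are hypothetical.

    import csv
    from collections import Counter

    # Count rows per (team, status) pair in a hypothetical tickets.csv.
    counts = Counter()
    with open("tickets.csv", newline="") as f:
        for row in csv.DictReader(f):
            counts[(row["team"], row["status"])] += 1

    print(f"{'team':<15}{'status':<12}{'count':>5}")
    for (team, status), n in sorted(counts.items()):
        print(f"{team:<15}{status:<12}{n:>5}")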
- kredd 10 months ago
I'm in a similar boat as well, and most of the time I just make sure to take the new models for a test drive. Just trying to use them here and there as well, figuring out their capabilities and shortcomings. Makes it much easier to smell the vapourware when I hear the news.
- adamnemecek 10 months ago
Current ML will be replaced by something fundamentally different. We are in the pre-jquery days of ML.
- bityard 10 months ago
Are you looking to _use_ AI/ML or take up an interest in developing or deploying AI/ML? Because those are very different questions.
Offtopic, but today I encountered my first AI-might-be-running-the-business moment. I had a helpdesk ticket open with IT for an issue with my laptop. It got assigned to a real person. After a few days of back-and-forth, the issue was resolved. I updated the ticket to the effect of, "Yup, I guess we can close this ticket and I will open a new one if it crops up again. Thank you for your patience and working with me on this." A few seconds later, I get an email saying that an AI agent decided to close my ticket based on the wording of my update.
Which, you know, is fine I guess. The business wants to close tickets because We Have Metrics, Dammit. But if the roles were reversed and I was the help desk agent, seeing the note of gratitude and clicking that Resolved button would very likely be the only little endorphin hit that kept me plugging away on tickets. Letting AI do ONLY the easy and fun parts of my job would just be straight-up demoralizing to me.
- kukkeliskuu 10 months ago
I am running a fairly popular website, which has many unsophisticated users. There is a FAQ and instructional videos, but in general people do not read or understand the instructions. Then they send me email or call me. I spend lots of time repeating the same answers. I have been developing a support system that answers these basic questions based on the FAQs, and if it does not know how to respond, it tries to summarize the issue and sends the issue to me. I am surprised how well it functions. I don't get endorphin hits from repeating the same stuff for people.
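A rough sketch of that answer-or-escalate flow, assuming the openai Python package; the FAQ file, model name, and the send_email_to_admin helper are hypothetical placeholders, and the real system described above may work differently.

    from openai import OpenAI

    client = OpenAI()                 # assumes OPENAI_API_KEY is set
    FAQ = open("faq.txt").read()      # hypothetical file holding the site's FAQ text

    def send_email_to_admin(summary: str) -> None:
        # Hypothetical helper: forward the summarized issue to a human.
        print("Escalated to admin:", summary)

    def answer_or_escalate(question: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Answer the user's question using ONLY the FAQ below. "
                            "If the FAQ doesn't cover it, reply with 'ESCALATE: <one-line summary>'.\n\n" + FAQ},
                {"role": "user", "content": question},
            ],
        )
        answer = resp.choices[0].message.content
        if answer.startswith("ESCALATE:"):
            send_email_to_admin(answer)
            return "Thanks! A human will get back to you."
        return answer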
- rldjbpin 10 months ago
To put it bluntly, there seems to be tons of gatekeeping in this field. But if you want to mix full stack with AI/ML use cases, I think it might be a good idea to just keep track of high-level news and services that allow you to interface between the latest functionalities and an app.
There is enough space for creating user experiences built on top of existing work instead of learning how the sausage is made. But I pray you don't just stick to text/LLMs.
- darepublic 10 months ago
Try reading some material on deep learning; try out open source AI libs like Detectron2 on cloud GPU servers (i.e. Colab). Learn some Python, including env set up.
- dasven 10 months ago
Play first, try things out, ideas will bubble up, trust your own ingenuity
- djaouen 10 months ago
I find that I have no need for AI or ML. I just don't use it.
- pseudocomposer 10 months ago
There are two sides to this: using AI/ML/LLMs to augment your own development ability, and using AI/ML/LLMs within apps you build.
For the former side, Copilot-type implementations are pretty intuitively useful when used this way. I find it most useful as an autocomplete, but the chatbot functionality can also be a nice, slightly-better alternative to "talk to a duck when you're stuck." That said, I'll focus on the latter side (using AI/ML in your actual work) from here.
Generalized AI/ML/LLMs are really just a "black box API" like any of the others in our toolbelt already, be they Postgres, Redis, SSE, CUDA, Rails, hell, even things like the filesystem and C atop assembly. We don't need to know all the inner workings of these things, just enough to see how to use the abstraction. You probably take when to use a lot of these things for granted at this point, but the reason we use any of these things is that they're good for the specific problem at hand. And LLMs are no different!
What's important to recognize is the types of problems that LLMs are good for, and where to integrate them into your apps. And, well, a pretty obvious class of this is parsing plain text into structured data to be used in your app. This is pretty easy to prompt an LLM to do. OpenAI and WebLLM provide a pretty straightforward common set of APIs in their NPM libraries (and other language bindings are pretty similar). It's far from a "standard," but it's definitely worthwhile to familiarize yourself with how both of these work.
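A minimal Python sketch of that "plain text in, structured data out" pattern follows; the comment above mentions the NPM libraries, but this uses the openai Python package instead, and the JSON field names are placeholders, not the schema the app below actually uses.

    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    def parse_event(text: str) -> dict:
        """Parse free-form event text into structured fields (field names are hypothetical)."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            response_format={"type": "json_object"},
            messages=[
                {"role": "system",
                 "content": "Extract the event from the text as JSON with keys: title, start_time, location."},
                {"role": "user", "content": text},
            ],
        )
        return json.loads(resp.choices[0].message.content)

    print(parse_event("Potluck at Riverside Park this Saturday at 6pm"))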
For an example, I've made use of both OpenAI and WebLLM in an "Event AI" offshoot to my social media app [1], parsing social media events from plaintext (like email list content, etc.); feel free to test it and view the (AGPL) source for reference as to how I'm using both those APIs to do this.
For projects where you actually have money to spend on your LLM boxes, you'll probably do this work on the BE rather than the FE as demoed there, but the concepts should transfer pretty straightforwardly.
If you're interested in really understanding the inner workings of LLMs, I don't want to discourage you from that! But it does seem like really getting into that will ultimately mean a career change from full-stack software engineering into data science, just because both have such a broad base of underlying skills we need to have. I'm happy to be wrong about this, though!
[1] Source: https://github.com/JonLatane/jonline/blob/main/frontends/tam... | Implementation: https://jonline.io/event_ai
- hansvm 10 months ago
> suspect it is easier to see opportunities when you have some experience working with the tools
Yes, absolutely. The most effective way I know to develop that sort of intuition (not just in AI/ML, but most subjects) is to try _and fail_ many times. You need to learn the boundaries of what works, what doesn't, and why. Pick a framework (or, when learning, you'd ideally start with one and develop the rest of your intuition by building those parts yourself), pick a project, and try to make it work. Focus on getting the ML bits solid rather than completing products if you want to get that experience faster (unless you also have no "product" experience and might benefit from seeing a few things through end-to-end).
> stay relevant in the long run
Outside of the mild uncertainty in AI replacing/changing the act of programming itself (and, for that, I haven't seen a lot of great options other than learning how to leverage those tools for yourself (keep in mind, most tasks will be slower if you do so, so you'll have a learning curve before you're as productive as before again; you can't replace everything with current-gen AI), and we might be screwed anyway), I wouldn't worry about that in the slightest unless you explicitly want to go into AI/ML for some reason. Even in AI-heavy companies, only something like 10% of developers tangentially touch AI stuff (outside of smallish startups where small employee counts admit more variance). Those other 90% of jobs are the same as ever.
> keep up my learning in these areas
In addition to the general concept of trying things and failing, which is extremely important (also a good way to learn math, programming, and linguistics), I'd advise against actively pursuing the latest trends until you have a good enough mentor or good enough intuition to have a feel for which ones are important. There are too many things happening, there's a lot of money on the line, and there are a lot of people selling rusty pickaxes for this gold rush (many intentionally, many because they don't know any better). It'll take way too much time, and you'll not have a good enough signal-to-noise ratio for it to be worth it.
As one concrete recommendation, start following Yannic Kilcher on YouTube. He covers most of the more important latest models, papers, and ideas. Most of his opinions in the space are decent. I don't think he produces more than an hour per day or so of content (and with relatively slow speaking rates (the thing the normal YT audience wants), so you might get away with 2x playback speed if you want to go a bit faster). Or find any good list of "foundational" papers to internalize (something like 5-20). Posting those is fairly common on HN; find somebody who looks like they've been studying the space for awhile. Avoid advice from big-name AI celebrities. Find a mentor. The details don't matter too much, but as much as possible you'd like to find somebody moderately trustworthy to borrow their expert knowledge to separate the wheat from the chaff, and you'll get better results if their incentive structure is to produce good information rather than a lot of information.
Once you have some sort of background in what's possible, how it works, performance characteristics, ..., it's pretty easy to look at a new idea, new service, new business, ..., and tell if it's definitely viable, maybe viable, or full of crap. Your choice of libraries, frameworks, network topologies, ..., then becomes fairly easy.
>> other people saying to build something simple with LLMs and brag about it
Maybe. Playing with a thing is a great way to build intuition. That's not too dissimilar from what I recommended above. When it comes to what you're telling the world about yourself though, you want to make sure to build the right impression. If you have some evidence that you can lightly productize LLMs, that's in-demand right this second. If you publish the code to do so, that also serves as an artifact proving that you can code with some degree of competency. If you heavily advertise LLMs on your resume, that's also a signal that you don't have "real" ML experience. It'll, ideally, be weighed against the other signals, but you're painting a picture of yourself, and you want that picture to show the things you want shown.
> can't see any use case for AI/ML
As a rule of thumb (not universal, but assuming you don't build up a lot of intuition first), AI/ML is a great solution when:
(1) You're doing a lot of _something_ with complicated rules
(2) You have a lot of data pertaining to that _something_
(3) There exists some reason why you're tolerant of errors
I won't expand that into all the possible things that might mean, but I'll highlight a couple to hopefully help start building a bit of intuition right away:
(a) Modern ML stuff is often written in dynamic languages and uses big models. That gives people weird impressions of what it's capable of. At $WORK we do millions of inferences per second. At home, I used ML inside a mouse driver to solve something libinput struggled with and locked up handling. If you have a lot of data (mouse drivers generate bajillions of events), and there's some reasonable failure strategy (the mouse driver problem is just filtering out phantom events; if you reject a few real events per millisecond then your mouse just moves 0.1% slower or something, which you can adjust in your settings if you care), you can absolutely replace hysteresis and all that nonsense with a basic ML model perfectly representing your system. I've done tons of things beyond that, and the space of opportunities dwarfs anything I've written. Low-latency ML is impactful.
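A toy sketch of what "a basic ML model" can mean in a setting like that, using scikit-learn; the features (time delta, movement), the data, and the threshold are entirely hypothetical and much smaller than anything a real driver would train on.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical features per mouse event: (dt_ms, dx, dy); label 1 = phantom event.
    X = np.array([[0.1, 40, -35], [8.0, 2, 1], [0.2, -38, 42], [7.5, 1, 0]])
    y = np.array([1, 0, 1, 0])

    clf = LogisticRegression().fit(X, y)

    def is_phantom(dt_ms: float, dx: int, dy: int) -> bool:
        # Dropping a few real events only slows the cursor slightly, so a
        # probability threshold is an acceptable failure mode here.
        return clf.predict_proba([[dt_ms, dx, dy]])[0, 1] > 0.9

    print(is_phantom(0.15, 50, -45))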
(b) Even complicated, error-prone computer-vision tasks can have some mechanism by which they're tolerant of errors. Suppose you're trying to trap an entire family of wild hogs at once (otherwise they'll tend to go into hiding, produce a litter of problems, and never enter your trap again since they lost half their family in the process). You'd like a cheap way to monitor the trap over a period of time and determine which hogs are part of the family. Suppose you don't close the trap when you should have. What happens? You try again another day; no harm, no foul. Suppose you did close it when you shouldn't have? You're no worse off than without the automation, and if it's even 50-80% accurate (in practice you can do much, much better) then it saves you countless man-hours getting rid of the hogs, potentially taking a couple tries.
(c) Look at something like plant identification apps. They're usually right, they crowd-source photos to go alongside predictions, they highlight poisonous lookalikes, the prediction gives lists and confidences for each prediction, and the result is something easy for a person to go investigate via more reliable sources (genus, species, descriptions of physical characteristics, ...). I'm sure there exists _somebody_ who will ignore all the warnings, never look something up, and eat poison hemlock thinking it's a particularly un-tasty carrot, but that person probably would have been screwed by a plant identification book or a particularly helpful friend showing them what wild carrots look like, and IMO the app is much easier to use and more reliable for everyone else in the world, given that they have mechanisms in place to handle the inevitable failures.