<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Notes on Scroll Wheel</title><link>https://scrollwheel.net/notes/</link><description>Recent content in Notes on Scroll Wheel</description><generator>Hugo</generator><language>en-us</language><atom:link href="https://scrollwheel.net/notes/index.xml" rel="self" type="application/rss+xml"/><item><title/><link>https://scrollwheel.net/notes/cuts/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://scrollwheel.net/notes/cuts/</guid><description>&lt;p&gt;The LLM is just taking all these resources that you could have found yourself and mashing them into a response with a friendly human tone. It is the same in my field with generating code. Before AI, I would go to Stack Overflow, read official documentation, or find a technical blog. Now an LLM that has been trained on all of these resources and more uses them to generate the response. Somehow, though, the adage &amp;ldquo;don&amp;rsquo;t copy code straight from Stack Overflow&amp;rdquo; didn&amp;rsquo;t carry into the AI age.&lt;/p&gt;</description></item></channel></rss>