I've been forced to reckon with generative LLMs lately. For me, it is easy and natural to think in abstract terms when it comes to programming, and related things like setting up and structuring a database etc, but I've always hated doing the work. It has always been something I've forced myself to do in order to build something, for work or whatever. I find it repetitive and boring.
Now I'm finding that I can use code helpers built on generative LLMs to get things done so quickly, and to do things I wouldn't even attempt before. I'll be honest, I've taken some pleasure in solving a problem more cleanly than people who are much better at coding (and who enjoy it as an intellectual challenge etc). I've been able to skip their "gatekeeping" because I can just implement the solution I want by being very specific in my instructions to the chatbot, understanding every step, but having "it" do the menial tasks of working out the internal logic and syntax etc. I feel like it's given me a chance to "prove" concepts I was previously unable to set into motion due to being unwilling/unable to work out the technical details of the components.
The linguist in me is conflicted. The formalisation of language (in combination with the massive and arguably grossly unethical data collection) that these programs are built on does not at all reflect my views on language, what it "is" (both in and out of "context") or what a fruitful and inclusive line of inquiry for linguistics as a field would/should be. But I'll be damned if chatbots aren't like having some super eager, super knowledgeable, beyond devoted sort of socially stunted helper. For controlled use (knowing exactly what you are building, and how), I find it just irresistible at the moment.
Not sure if this is me crossing to the dark side or what.