First of all, yeah, it is getting better, but I'm not sure it's gonna keep getting significantly better for years to come. The "singularity" requires a leap that many AI evangelists underestimate.
Second, even if that happens, it is fundamentally wrong to claim AI can make better laws or music than humans. If you want a computer to do something, you gotta quantify what "good" means in some way. How do you train a computer to write "good" music? Even worse, how do you make it write a "good" law? The computer doesn't know what that means. Good at achieving what? Whatever result the computer gives you, it's only gonna be as good as the trainer's idea of what "good" means.
> it's only gonna be as good as the trainer's idea of what "good" means.
training isn't this precise any more. the datasets are much too large for humans to actually annotate (like a scraped copy of the entire public internet for the last round of NLP algorithms), so instead the trainers look for things like coherence and the capacity of the trained algorithm to learn from a few examples (which is where the kind of good/bad selection you're thinking of actually takes place), then adjust parameters and try again until they get something stable out (most of the outputs are only good for a few rounds of Q&A before they devolve into incoherence). this is the difference between supervised learning (what you're thinking of) and unsupervised learning (what's becoming the only practical way to train many algorithms).
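a toy sketch of that difference, in python (my own illustration with made-up data, not anyone's actual pipeline):

```python
# toy contrast between supervised and self-supervised ("unsupervised") training data.
# everything here is invented for illustration.

# supervised: a human picks the target for every example,
# so somebody's idea of "good" is baked into each label.
labeled_examples = [
    ("this melody resolves nicely", "good"),
    ("random key mashing",          "bad"),
]

# self-supervised: raw text supplies its own target (the next token),
# so no per-example judgement is needed -- only the choice of corpus
# and of which training runs to keep.
tokens = "the quick brown fox jumps over the lazy dog".split()
next_token_examples = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

print(next_token_examples[2])  # (['the', 'quick', 'brown'], 'fox')
```

the judgement calls don't vanish, they move up a level: which corpus to scrape, which runs are "stable" enough to keep.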
to see what this feels like in practice, browse this collection of samples pulled out of GPT-3, an algorithm that has only received unsupervised training (the option to give it supervised refinement is not yet available). the people training it most definitely could not have intended much of what shows up here. many of the examples are not great, but there are also way more gems here than you'd expect from something trained only on meta factors rather than on specific kinds of outputs for specific questions.
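for a feel of the few-shot part, here's a hypothetical prompt of the kind those samples come from (the examples are mine, not from the collection) -- the asker's prompt, not a trainer's labels, is what pins down the task:

```python
# a made-up few-shot prompt in the GPT-3 style; swap the
# examples and you've swapped the task the model performs.
prompt = """English: Where is the library?
French: Où est la bibliothèque ?

English: I would like a coffee.
French: Je voudrais un café.

English: The weather is nice today.
French:"""
# the model continues the pattern; nobody labelled "translation
# is a good output" anywhere during training.
```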
I'll link stuff that stood out to me if anyone is interested.
This doesn't matter; the central difficulty is the same whether the learning is supervised or unsupervised. You're describing methods, but behind all the terminology, at the end of the day there is still someone making a value judgement at some point in the procedure, no matter how obscured that judgement might be, and that judgement is fundamental to the result you are going to get, no matter how good your algorithm is. Cool, so you have an AI that makes "good" music. Good according to whom? Because whatever idea of musical value someone who listens to Five Finger Death Punch 24/7 has, it's probably not mine.
Now this isn't so bad when it comes to music. But laws? If you ask 10 people what they think laws should be achieving you'll get 11 different answers, but whatever you decide the right answer is, it's gonna be applied to all of them.
> there is still someone making a value judgement at some point in the procedure
sure, I'm saying the values involved are getting increasingly abstract.
> Cool, so you have an AI that makes "good" music. Good according to whom?
this is why I linked the page of examples. the answer is that it's according to the person asking the question (which is new, it didn't use to be this way).
> Now this isn't so bad when it comes to music. But laws? If you ask 10 people what they think laws should be achieving you'll get 11 different answers, but whatever you decide the right answer is, it's gonna be applied to all of them.
who's saying AI should write law right now...? I'm pointing out that there's more capacity here than leftists generally give credit for and that that capacity can be used for good and bad ends.
That is what I am saying: it is not a matter of right now, it is a matter of "ever". It is a fundamental difficulty that inherently limits the technology's scope, and it is not one that better algorithms resolve. That holds EVEN if the technology actually has the capacity to get there any time soon, which is not a given unless AI can evolve to improve AI algorithms significantly, which isn't a given either; and even if it does, it is again not a given that it won't cap out once more.
my objection to AI writing laws isn't really about the technology -- maybe it can get to a point where it might make sense, maybe it can't, but that's immaterial. the politics of the person who says AI should write laws is kind of questionable. the hard part about laws, about politics, isn't the technical matter of finding the cleverest solution or whatever; it's the hard work of convincing actual human beings that they should support the law. outsourcing that to AI does nothing to solve that problem, except perhaps in a world where we've built a cult around AI and people unquestioningly believe what an AI tells them.
technology, no matter how clever or powerful, can't solve political problems.
Exactly, that is why I believe it to be a fundamental limitation which won't be solved by better technology. I also have a similar reason to disagree with some people who think AI will replace musicians, though there are also other very important factors that people overlook.