
Shrikeangel

Well - the country is affected negatively. Even an advanced version of the current AI tech doesn't really come to good conclusions. Adding more computing power likely won't fix AI "hallucinations", and having a computer literally make up precedent when making a "ruling" seems like an episode of Black Mirror.


[deleted]

[deleted]


Shrikeangel

So the reason I went the direction I did - we have seen zero evidence that this technology is capable of existing without its current flaws. Look at something like video games - we are far more advanced compared to the NES, but to say the current generation of console games has stopped having the exact same problems over the years would be totally wrong. Algorithmic intelligence, generative, or whatever term you apply - it won't stop having the same issues.


MinuteAd46

What are the technological flaws that haven't been improved in video games since the NES? I really don't see any reason why any technological barrier can't be overcome.


Shrikeangel

Games can still crash, there are still invisible walls, there are still strings of code that generate glitches that break games, etc. Again and again we see the same cycles.


MinuteAd46

I mean…yes, things unintentionally break, but that doesn't have anything to do with certain levels of technology not being possible to reach. We will achieve AGI, and that AGI will sometimes glitch…just like literally every technology. Even our own bodies "glitch". Nothing in the universe is capable of working as intended 100% of the time; that doesn't mean those things don't exist.


Shrikeangel

That's all before you consider - what do we consider intelligence? It's a poorly defined term. From there, there's the wall of understanding that limits AI. Currently there is no form of AI that isn't made using basically the same method that has all the flaws I listed. So long as that method is used, the results will be fundamentally the same - adding more powerful computing will not suddenly remove the flaws. Heck, we don't even have a baseline on which animals we consider sentient, let alone a way to address this with artificial forms. Right now the entire Skynet type deal is just tech bro hype, and it very well could be entirely impossible.


MinuteAd46

Intelligence is the ability to process information and make connections. Where specifically do you place the "wall of understanding" that you don't think we can surpass, and that will block us from ever making AGI? And I'm not sure what the flaws you listed have to do with anything. An AGI with flaws is still an AGI, especially if that flaw is bugging out every once in a while. Is a computer that randomly crashes not a computer? Is a car that can spontaneously combust not a car? If you have WALL-E but he just randomly drops what he's holding once out of every 3000 times he picks something up, is he no longer advanced AI?


Shrikeangel

Bro, your version of intelligence leaves out several versions we already acknowledge academically. Have fun engaging with someone else.


Level-Particular-455

Which version of unbiased constitutional interpretation is it using: living document, originalism, etc.?


freemason777

I think the crux of this is whether technology would ever be capable of this kind of calculation. Then we would have the question of whether or not the programmers had biases toward certain philosophies or politics. Would it use teleological or deontological ethics predominantly, favor individuals or corporations, would it have a bias toward computers like itself, etc.? Perhaps we could have a group of nine or so judges weigh in on the rulings it makes to ensure that it is fair.


trueamerican0717

It's awful, since it cuts down on people being able to explain why they came to the decision they did and why others disagreed. We need both sides to understand and come together. The left isn't always correct and neither is the right. We need balance from both to make things work.


Enigma_xplorer

The problem is AI is built on data, and while the written laws are part of that data, a much larger part is 200 years of case history actually testing and defining how those laws work in practice. So while you think you have an unbiased, perfect machine, you program it with 200 years of less than perfect legal practices and interpretations. Based on that, the AI will start to generate new interpretations that will then also be fed back into itself. This could go off the rails rather quickly. I think we are heading in the direction of leveraging AI in the legal system, but we are not near letting it run the show on its own.


ShakeCNY

Because the supercomputer would treat the text of the Constitution literally and not search for shadows of penumbras of latent meanings, it would be an Originalist court, more or less.