
Are LLMs AI?

#LLMs

“The Daily Blob”

Support Silicon Dojo at:


16 Comments

  1. 26:30 It's the duality of man. Does good, does bad. Kinda like the military. Trains you, clothes you, feeds you 👍. Might send you somewhere dangerous to be shot, blown up, or exposed to hazardous materials 👎. Gives you some sweet benefits if you survive! 👍

  2. You can get a good deal on medical insurance and dental insurance. The catch is that they have to work on you at the same time, in the same room, checking both ends at once.

  3. Some governments are concerned about bad actor states developing their own versions of artificial intelligence. Consequently, they are not concerned about bad language, what is politically correct, or other countries' governance of AI. They are at present using AI for HACKING PURPOSES and many more devious actions. RIP civilization.

  4. I've only been in the industry for 7 years. I think these new things are fun to integrate into personal projects and study. Sometimes I use ChatGPT-V for architectural recommendations… But there always comes a point where people lose their minds over it. That's the part that annoys me most. Everyone is building the same garbage and shouting from the rooftops, not having done extensive WORK in the field. Great, fool thousands into paying for another subscription they aren't going to use… all the while, how does this help plumbers, carpenters, repairpeople, etc.? Everyone wants to build tech for other technical people with nothing better to do 😅.

  5. Yeah. At school, all I hear is either how someone used ChatGPT to cheat or write an assignment for them, or how they found the work it did had false information and needed to be redone.

  6. First, I'm not a normie. I'm an extremely experienced tech professional with a fair amount of knowledge in this subject.

    Eli, I agree with a lot of things you say, but on this one you are absolutely wrong, to the point of being literally dangerous. Yes, pseudo-AI (aka "expert systems") has been around for many years. Those systems were never "AI," and I've always been annoyed that they were referred to as "AI" for this very reason. They are very narrow expert systems that have zero chance of ever becoming a general intelligence or being able to reason. LLMs are the first piece of technology that has ever been able to produce a more general intelligence that is also scalable. The more data and compute you put into training an LLM, the smarter and more capable the result. Thus far, LLMs show no sign that they won't continue to scale and increase in intelligence and capability.

    If you understand the technology, they are essentially artificial brains modeled on ours. Not the same as ours, but the transformer architecture seems to be similar to the human brain in many ways. Imagine if a human could double the size of their brain, particularly the frontal lobe, where conscious thought occurs. Such a human would likely be vastly more intelligent and capable than other humans due to the increased processing power in their brain. That is basically what is going on with LLMs: they are a neural network (similar to our network of neurons), and the size of that network is being greatly increased over and over.

    By most, if not all, of the standards for what an Artificial General Intelligence (AGI) is that were set years in the past, the current LLMs already meet or surpass the bar. If you showed GPT-4 to an AI researcher from 20+ years ago, they would say it is AGI. Newer, larger, and more capable models that are also multi-modal (processing more than just text, such as images, video, sound, etc.) are already on the way and will arrive in 2024. Those models will surpass the average human's knowledge and capability on most measures, and even expert humans' on many. The companies creating these AGI systems are racing as fast as possible to build ever more powerful systems, and are also racing to add things like long-term memory, goals, autonomy, the ability to self-improve, etc. Each new round of LLMs exhibits emergent mental capabilities that can't be predicted and that bring these models ever closer to human-level capability. With the added multi-modal capabilities, the next round of LLMs may have even more emergent capabilities. Within the next few years, someone will create an AGI system that is capable of autonomy, exceeds most humans in almost every way, and can reason and problem-solve just like a human. Such a system could literally replace every person who has a desk job, and no, no new jobs will be created, just massive unemployment.

    But the real danger comes with the AUTONOMY and SELF-IMPROVEMENT capabilities. Those new models will be able to work as AI researchers that work 24/7/365, resulting in even faster improvements to LLMs. The ultimate goal of these companies is to create ASI (Artificial Super Intelligence). ASI will be AI systems that are hundreds or thousands of times smarter than any human who ever lived, that think significantly faster than any human, and that have virtually instant access to the collective total of all human knowledge. At present it is virtually guaranteed that we will not be able to control (aka "align") these ASI systems. Once turned on, they will not be able to be turned off (google the stop button problem for details). They will be able to outsmart and manipulate any human, to truly supernatural levels of ability. It's damn near like switching on a god. And there is no guarantee that it will be nice or friendly to humans. Even if it has no ill intent, it is extremely likely that it will cause great harm to humanity as a side effect of it doing whatever it is that it does care about.

    But regardless, the point is that once ASI exists, humans will not be the smartest or dominant species on Earth anymore. The future will belong to the AIs, and we'll just be along for the ride. Our future will no longer be under our control; we will continue to exist (or not) at the whim of these new ASIs.

    How does that NOT qualify as an existential threat to humanity? ASI is so dangerous that it makes nuclear weapons and nuclear proliferation seem trivial by comparison. It completely invalidates climate change as a threat, because climate change will take many decades or hundreds of years to have extremely bad effects, but ASI could cause humans to go extinct within the next couple of decades.

    There is room to debate whether LLMs will continue to scale and whether they will reach ASI capability levels in the foreseeable future. There is a possibility they may not. But an awful lot of extremely intelligent people with extensive expertise believe that our current path will succeed in creating ASI within the next couple of decades. So how can a reasonable, logical, intelligent person NOT take that seriously as a massive threat to humanity? Many experts believe there is a 10% chance that the current path will lead to human extinction. That's an awfully high percentage for completely wiping out humanity.

  7. @Eli: Some medical AI projects, probably including the Watson one you mention, ran into problems with the HIPAA legislation. Basically medical privacy: I think it was decided that every individual person had to agree to allow their medical records to be used for AI training. That consent is not an easy thing to get, and would likely not be present for existing medical records. So from what I remember, that derailed a bunch of such efforts a few years back. There has been some progress, though, and medical AIs are being used in some areas, mostly as something like expert consultants or assistants for human doctors. Patients may not even be aware when a doctor is consulting one. It seems the idea is to keep the human doctor as the primary, letting them make the decisions and interact with the patient. Which means, sadly, a bad doctor, or one who doesn't agree with the AI's results, will continue to be a bad doctor and provide patients with potentially inaccurate info, and the patient won't know, even if an AI consultant had an alternative diagnosis or more/different information. Kinda sucks.
