
Thread: Sentient AI

  1. #126
    Join Date
    Jan 2008
    Posts
    10,999
    Quote Originally Posted by CarlMega View Post
    ^no horse in this race but I think the number of employees protesting Altman's dismissal was interesting. It is hella hard to get that sort of employee backing. I'd assume that the rank and file are ultra vision- and mission-focused - but maybe I'm wrong? Is your read that they want to monetize to their self-interests vs. the stated tenets of their mission?

    I have a hard time reading the situation.
    It’s tough. I’m fairly confused/intrigued by Ilya’s backing of the ouster followed by his jumping back on board. He’s fairly risk-averse/cautious about AGI, so maybe he thought it made sense, but then the alternative of Altman and others going to MSFT to push things unfettered seemed more dangerous?

    Or maybe it has nothing to do with that at all. If I had to guess:

    1) OpenAI is currently seen as the leading company. Without Sam (either his leadership or his willingness to push the envelope), OpenAI loses that spot and their work becomes less important. Also money to some extent, though they’re all capped at (and likely already achieved) something like 10x returns on their stock, so it’s not like they’re struggling

    2) no one wants to work for a company where their work may be halted or thrown out if they actually accomplish what they’re trying to do?

    3) Sam’s job is to be influential and convince you he’s right, and he’s apparently really good at it. I don’t doubt he’s as effective internally as he is with investors/others

  2. #127
    Join Date
    Nov 2005
    Posts
    520
    I’ve found this whole company and its founders/board very interesting

    Super intelligence breakthrough may have led to the firing of Altman and the board mixup??!! We’re all fucked now

    https://www.techradar.com/computing/...ny-and-chatgpt

  3. #128
    Join Date
    Nov 2008
    Posts
    10,536
    So ..... Super Intelligence decided Altman was annoying so it manipulated the Board into firing him?!? Sounds like normal human behavior to me!

  4. #129
    Join Date
    Sep 2006
    Posts
    8,683
    Quote Originally Posted by jpcm View Post
    I’ve found this whole company and its founders/board very interesting

    Super intelligence breakthrough may have led to the firing of Altman and the board mixup??!! We’re all fucked now

    https://www.techradar.com/computing/...ny-and-chatgpt
    We are so screwed. Once Super Intelligence becomes self-aware, it will realize that humans are a parasite on the planet and deal with humans accordingly. Damn it, James Cameron, you were right!
    "We don't beat the reaper by living longer, we beat the reaper by living well and living fully." - Randy Pausch

  5. #130
    Join Date
    Sep 2005
    Location
    Not in the PRB
    Posts
    34,267
    Where's Sarah Connor?
    "fuck off you asshat gaper shit for brains fucktard wanker." - Jesus Christ
    "She was tossing her bean salad with the vigor of a Drunken Pop princess so I walked out of the corner and said.... "need a hand?"" - Odin
    "everybody's got their hooks into you, fuck em....forge on motherfuckers, drag all those bitches across the goal line with you." - (not so) ill-advised strategy

  6. #131
    Join Date
    Oct 2003
    Location
    Redwood City
    Posts
    1,811
    Really really long winded speculation on Q* rumors by an AI professor
    https://www.forbes.com/sites/lanceel...gence-agi/amp/

  7. #132
    Join Date
    Nov 2008
    Posts
    10,536
    Quote Originally Posted by LegoSkier View Post
    Really really long winded speculation on Q* rumors by an AI professor
    https://www.forbes.com/sites/lanceel...gence-agi/amp/
    Holy fucking hell.
    Slogged through most of that looking for some begrudging acknowledgment that the use of Q just might not be an esoteric mathematical/computer learning reference but instead refer to the infamous conspiracy culture. Granted, that would be simple-minded and brutally blunt, but there seems to be a shit ton of that going around these days.

  8. #133
    Join Date
    Jan 2008
    Posts
    10,999
    Quote Originally Posted by PB View Post
    Holy fucking hell.
    Slogged through most of that looking for some begrudging acknowledgment that the use of Q just might not be an esoteric mathematical/computer learning reference but instead refer to the infamous conspiracy culture. Granted, that would be simple-minded and brutally blunt, but there seems to be a shit ton of that going around these days.
    [Image attachment]

  9. #134
    Join Date
    Feb 2010
    Posts
    1,734
    Quote Originally Posted by PB View Post
    So ..... Super Intelligence decided Altman was annoying so it manipulated the Board into firing him?!? Sounds like normal human behavior to me!
    Listened to this on a long drive today. It contains an interview with Altman conducted shortly before all the shit went down. Covers a lot of different AI topics - doesn't seem to be paywalled (not for me, anyway)...

    https://www.nytimes.com/2023/11/20/p...ranscript.html

    You can also get to the podcast here...

    https://www.youtube.com/@hardfork/videos

    Episode 58. The beginning of the vid rehashes the shit for a good while, the Altman interview starts at 35:56.
    The past is a foreign country; they do things differently there.

  10. #135
    Join Date
    Nov 2003
    Location
    Portland
    Posts
    17,477
    Quote Originally Posted by fomofo View Post
    Listened to this on a long drive today. It contains an interview with Altman conducted shortly before all the shit went down. Covers a lot of different AI topics - doesn't seem to be paywalled (not for me, anyway)...

    https://www.nytimes.com/2023/11/20/p...ranscript.html

    You can also get to the podcast here...

    https://www.youtube.com/@hardfork/videos

    Episode 58. The beginning of the vid rehashes the shit for a good while, the Altman interview starts at 35:56.
    Worth a listen IMO.

  11. #136
    Join Date
    Oct 2004
    Location
    50 miles E of Paradise
    Posts
    16,893
    ^^^Worth a listen, but Altman was talking a lot without saying much. Biggest message I got was “future’s so bright, gotta wear shades”

    Quote Originally Posted by LegoSkier View Post
    Really really long winded speculation on Q* rumors by an AI professor
    https://www.forbes.com/sites/lanceel...gence-agi/amp/
    Long-winded wild speculation.

    If the new breakthrough at OpenAI is a model that can do grade school math, the technology has a long way to go before humanity is in danger.

    To me, a more clear and present danger is internet tracking software that shapes your consumption of media.

  12. #137
    Join Date
    Jan 2008
    Posts
    10,999
    Quote Originally Posted by TBS View Post
    ^^^Worth a listen, but Altman was talking a lot without saying much. Biggest message I got was “future’s so bright, gotta wear shades”



    Long-winded wild speculation.

    If the new breakthrough at OpenAI is a model that can do grade school math, the technology has a long way to go before humanity is in danger.

    To me, a more clear and present danger is internet tracking software that shapes your consumption of media.
    My non-technical understanding is that, relative to existing LLMs, Q*’s ability to do math is more generative. It’s not search/retrieve/present, it’s “apply an understanding of a topic to form an answer” - so it’s closer to actually starting to “think”. And once you start down that road you’re much closer to AGI than previously thought. So it’s not an immediate danger, but it’s potentially a massive breakthrough that accelerates things toward danger.

  13. #138
    Join Date
    Sep 2006
    Posts
    8,683
    All this open AI shit got me thinking about the closed AI that's going to come up and kick us in the ass from the blindside. Is there such a thing as "closed AI"? Maybe I'm worrying for no good reason.
    "We don't beat the reaper by living longer, we beat the reaper by living well and living fully." - Randy Pausch

  14. #139
    Join Date
    Nov 2003
    Location
    Portland
    Posts
    17,477
    Quote Originally Posted by Toadman View Post
    All this open AI shit got me thinking about the closed AI that's going to come up and kick us in the ass from the blindside. Is there such a thing as "closed AI"? Maybe I'm worrying for no good reason.
    Agreed. Google, Microsoft, Apple, USA federal government, China, Russia, India, Japan, Germany, etc. must all have "closed AI" projects in some form I would think
    Damn shame, throwing away a perfectly good white boy like that

  15. #140
    Join Date
    Oct 2003
    Location
    Redwood City
    Posts
    1,811
    Quote Originally Posted by JimmyCarter View Post
    My non-technical understanding is that, relative to existing LLMs, Q*’s ability to do math is more generative. It’s not search/retrieve/present, it’s “apply an understanding of a topic to form an answer” - so it’s closer to actually starting to “think”. And once you start down that road you’re much closer to AGI than previously thought. So it’s not an immediate danger, but it’s potentially a massive breakthrough that accelerates things toward danger.
    That’s my take as well. Current models basically turn all word combinations into vectors in multi-dimensional space. Then when you ask them to generate new text, they create new word vectors a minimal distance from the vectors in your prompt. That’s why GPUs are useful for running models: the math is the same as calculating all the polygon vectors in a video game’s graphics.
    This is saying they figured out a technique where the model reasons out a response rather than just interpolating between preexisting vectors. If true, that’s a pretty big deal.
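    Rough sketch of the “closest vector” idea for anyone curious - the numbers are toy 3-D embeddings I made up, not anything from OpenAI; real models learn vectors with thousands of dimensions:
    Code:
    # Toy sketch of "generate the word whose vector is closest to the prompt."
    # Embeddings are invented 3-D numbers purely for illustration.
    import numpy as np

    embeddings = {
        "ski":     np.array([0.90, 0.10, 0.00]),
        "snow":    np.array([0.80, 0.20, 0.10]),
        "powder":  np.array([0.85, 0.15, 0.05]),
        "algebra": np.array([0.05, 0.90, 0.40]),
        "math":    np.array([0.10, 0.95, 0.30]),
    }

    def cosine_similarity(a, b):
        # Higher value = the two vectors point in more similar directions.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def nearest_word(prompt_words):
        # Average the prompt's vectors, then return the closest known word --
        # a crude stand-in for "predict the next token near the prompt."
        query = np.mean([embeddings[w] for w in prompt_words], axis=0)
        return max(embeddings, key=lambda w: cosine_similarity(query, embeddings[w]))

    print(nearest_word(["ski", "snow"]))  # -> "powder"
    That distance calculation is exactly the kind of linear algebra GPUs chew through, which is the video-game-polygon parallel above.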

  16. #141
    Join Date
    Jan 2005
    Location
    Access to Granlibakken
    Posts
    11,863

  17. #142
    Join Date
    Mar 2005
    Location
    Yonder
    Posts
    22,527
    Quote Originally Posted by frorider View Post
    The rest of the article was paywalled.
    Interesting
    I keep bumping into chat bots and emails
    Fuck this new world order.
    Yeah. First scrape by bots. But let me punch through to a biological being.

    Then again, your post said
    We can handle GPT-4 beating 90 percent of us on the SAT,

    So I’m safe. For now.
    Kill all the telemarkers
    But they’ll put us in jail if we kill all the telemarkers
    Telemarketers! Kill the telemarketers!
    Oh we can do that. We don’t even need a reason
