

Posted

...Trained to sound plausible and convincing, but oblivious to concepts of truth, relevance, or importance.


Posted (edited)

It can't eliminate it, but its use can be greatly reduced. It's already started, and Google is already on that, so Google will be fine. Google searches for certain topics, even on the "all" tab with the range set to "within 1 week" or "within 1 month" (and thus not the "news" tab), still return mostly the typical MSM websites. So even when search is used, it's not really making available all that's out there, because of the result ranking; people often don't go back more than a couple of pages. Maybe an "all, minus MSM" tab might help.
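For what it's worth, something close to an "all, minus MSM" view can already be approximated with search operators. Below is a minimal sketch, assuming Google's -site: exclusion operator and the tbs=qdr time-range URL parameter work as they do today; the excluded domains are purely illustrative.

from urllib.parse import quote_plus

# Purely illustrative exclusion list; swap in whatever outlets you want dropped.
EXCLUDED_SITES = ["cnn.com", "nytimes.com", "bbc.com"]

def build_search_url(topic: str) -> str:
    """Build a Google search URL that drops the listed sites and limits results to the past month."""
    exclusions = " ".join(f"-site:{site}" for site in EXCLUDED_SITES)
    query = quote_plus(f"{topic} {exclusions}")
    # tbs=qdr:m restricts results to the past month (qdr:w would be the past week).
    return f"https://www.google.com/search?q={query}&tbs=qdr:m"

print(build_search_url("example topic"))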

And there's the memory-software bit that uses that history to suggest videos on YT. It has helped bring attention to much interesting content that would be hard to come across without that recommendation generation, but at the same time it sort of guides me into viewing within its circle of recommendations, so I use YT's search engine much less. Naturally this trend can occur elsewhere and will increasingly become more possible to manipulate via programmed AI.

Edited by futon
Posted

I don't know. Either I'm looking for something specific, and other than AI helping to make hits match my intent even better, I'm not seeing how AI would reduce the utility value of an internet search (except maybe by faking content on the fly that sounds/looks plausible but is factually incorrect or otherwise manipulative, thus destroying the internet as we know it with an amplified bullshit avalanche).

Or I'm goofing off. Right now, search engine algorithms try to push me into filter bubbles; in the future, maybe AIs will make the suggestions. I don't really care. If I'm wasting my time watching YouTube or TikTok drivel, it's not super relevant what kind of drivel it is, as long as it amuses me. I go to some lengths to obfuscate my identity to Google and YouTube so I don't get trapped in the bubble of algorithmic "optimization", and I know it works because I get different results depending on which browser, user account, and computer I use. Theoretically, some AI could connect all the different interactions somewhere inside a Google datacenter, but despite all the effort that some marketing guys seem to put into creating persistent IDs, it doesn't actually manifest in convergent search results, so I guess they're not actually doing it.

 

Either way, other than the non-zero possibility that the internet gets destroyed as a semi-reliable information source, I don't see how "AI" will massively influence what I'm doing with search engines.

Posted (edited)

Well, of course AI won't be utilized so much for off-hours, non-satire funny stuff. But information is power, and just like old radio and the boxy TV tubes, control of information over the internet will be sought by those in power. It'll creep in. Even if it can't dictate the thoughts of seasoned, militarily literate people, that matters little, since there are only a few of them, and fewer still who are not part of the powers seeking control.

Furthermore, just because one thinks they can see through the power-play narrative statements, it doesn't mean they actually can. It's how some conclusions and statements about Imperial Japan-related history have become so misunderstood; at least to me, that agenda of an info-control regime, and its inertia, has become very obvious. Even knowing as much as I do, I used to think "well, if Japan just apologized to the comfort women properly, then things would get better quickly". Year after year, more and more biased BS has become obvious.

But I'm just that loner guy railing about me precious Nippon again. Well, it's what I know most about anyway. Maybe to defuse the stigma of being that Imperial Japan guy, I should try making the point with something I know less about..? What kind of logic is that? Lol, it won't stop that knee-jerk reaction, though. Well, I think it could be assumed that the US domestic side can have the same thought related to what matters to them, since it's usually one side that has control of the info organizations. It was radio, then the big TV boxes, later flat screens and websites; next is AI interwoven into searches and suggestion feeds.

Or in short: cat videos are safe. The opposition, the army-of-Samuel-Adams types, need to be on the lookout. AI is just the next possible tool of info control.

Although it's not necessarily just a US thing. China of course controls its internet. Any wild hypothetical victorious Imperial Japan would too. Well, gradients matter of course, not just black and white. But the main point: information control is a big-state thing, so it's not to be taken as inherently anti-US. But that's where Google, Microsoft, etc. are from.

Edited by futon
Posted (edited)
Quote

ChatGPT by @OpenAI now *expressively prohibits arguments for fossil fuels*.  (It used to offer them.)  Not only that, it excludes nuclear energy from its counter-suggestions.

https://chicagoboyz.net/archives/68805.html

 

Said "unintended" but the unintended behaviors go mostly in same direction.

Edited by lucklucky
Posted

MusicLM: Generating Music From Text   https://google-research.github.io/seanet/musiclm/examples/

Abstract: We introduce MusicLM, a model generating high-fidelity music from text descriptions such as "a calming violin melody backed by a distorted guitar riff". MusicLM casts the process of conditional music generation as a hierarchical sequence-to-sequence modeling task, and it generates music at 24 kHz that remains consistent over several minutes. Our experiments show that MusicLM outperforms previous systems both in audio quality and adherence to the text description. Moreover, we demonstrate that MusicLM can be conditioned on both text and a melody in that it can transform whistled and hummed melodies according to the style described in a text caption. To support future research, we publicly release MusicCaps, a dataset composed of 5.5k music-text pairs, with rich text descriptions provided by human experts.
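The "hierarchical sequence-to-sequence" framing in the abstract amounts to generating the music in stages, each conditioned on the one before it. Here is a very rough sketch of that staging; the function names, vocabulary sizes, and lengths are made up for illustration, and only the 24 kHz figure comes from the abstract.

import numpy as np

SAMPLE_RATE = 24_000  # the abstract says MusicLM generates audio at 24 kHz

def embed_text(description: str) -> np.ndarray:
    """Hypothetical text encoder: turn the caption into a conditioning vector."""
    rng = np.random.default_rng(abs(hash(description)) % 2**32)
    return rng.standard_normal(128)

def generate_semantic_tokens(conditioning: np.ndarray, steps: int) -> np.ndarray:
    """Stage 1 (stand-in): coarse tokens sketching long-range structure, conditioned on the text."""
    rng = np.random.default_rng(int(abs(conditioning[0]) * 1e6) % 2**32)
    return rng.integers(0, 1024, size=steps)

def generate_acoustic_tokens(semantic_tokens: np.ndarray, conditioning: np.ndarray) -> np.ndarray:
    """Stage 2 (stand-in): fine-grained acoustic tokens, conditioned on the coarse tokens and the text."""
    rng = np.random.default_rng(int(semantic_tokens.sum()) % 2**32)
    return rng.integers(0, 1024, size=semantic_tokens.size * 4)

def decode_to_waveform(acoustic_tokens: np.ndarray) -> np.ndarray:
    """Stage 3 (stand-in): a neural audio codec would decode tokens into 24 kHz samples; here, silence."""
    return np.zeros(acoustic_tokens.size * 240, dtype=np.float32)

caption = "a calming violin melody backed by a distorted guitar riff"
conditioning = embed_text(caption)
semantic = generate_semantic_tokens(conditioning, steps=250)
acoustic = generate_acoustic_tokens(semantic, conditioning)
audio = decode_to_waveform(acoustic)
print(f"{audio.size / SAMPLE_RATE:.1f} seconds of placeholder audio for: {caption!r}")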

Posted
On 1/31/2023 at 11:34 PM, Ssnake said:

How's AI going to eliminate internet searches?

Apparently people have started to ask ChatGPT to answer questions on the topics they're interested in.

It's one step down from "I researched it on the internet and the government is lying to you."

Posted

Microsoft adds ChatGPT to Bing.

 

Posted

Humans train the language models. Human biases shape the responses from the language models. Hence, we'll get what the creators want us to get; they are still setting the policy.

Posted (edited)

An interesting piece about the above and the "Sydney" behavior:

https://www.tomshardware.com/news/bing-sidney-chatbot-conversations

 

Don't get us wrong — it's smart, adaptive, and impressively nuanced, but we already knew that. It impressed Reddit user Fit-Meet1359 with its ability to correctly answer a "theory of mind" puzzle, demonstrating that it was capable of discerning someone's true feelings even though they were never explicitly stated. 

According to Reddit user TheSpiceHoarder, Bing's chatbot also managed to correctly identify the antecedent of the pronoun "it" in the sentence: "The trophy would not fit in the brown suitcase because it was too big." 

This sentence is an example of a Winograd schema challenge, which is a machine intelligence test that can only be solved using commonsense reasoning (as well as general knowledge). However, it's worth noting that Winograd schema challenges usually involve a pair of sentences, and I tried a couple of pairs of sentences with Bing's chatbot and received incorrect answers.
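For context, a full Winograd schema comes as a pair: the same sentence with one word swapped, which flips what "it" refers to. A minimal sketch of the trophy/suitcase pair (sentences and expected answers written out by hand, not taken from any benchmark file):

# Each entry: (sentence, expected antecedent of "it").
trophy_suitcase_pair = [
    ("The trophy would not fit in the brown suitcase because it was too big.", "the trophy"),
    ("The trophy would not fit in the brown suitcase because it was too small.", "the suitcase"),
]

for sentence, expected in trophy_suitcase_pair:
    prompt = f'In the sentence "{sentence}", what does "it" refer to?'
    print(prompt)
    print(f"  expected answer: {expected}")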

That said, there's no doubt that 'Sydney' is an impressive chatbot (as it should be, given the billions Microsoft has been dumping into OpenAI). But it seems like maybe you can't put all that intelligence into an adaptive, natural-language chatbot without getting some sort of existentially-angsty, defensive AI in return, based on what users have been reporting. If you poke it enough, 'Sydney' starts to get more than just a little wacky — users are reporting that the chatbot is responding to various inquiries with depressive bouts, existential crises, and defensive gaslighting.

For example, Reddit user Alfred_Chicken asked the chatbot if it thought it was sentient, and it seemed to have some sort of existential breakdown: (image in the link)

(...)

Finally, Reddit user vitorgrs managed to get the chatbot to go totally off the rails, calling them a liar, a faker, a criminal, and sounding genuinely emotional and upset at the end:

(...)

Anyway, it's certainly an interesting development. Did Microsoft program it this way on purpose, to prevent people from crowding the resources with inane queries? Is it... actually becoming sentient? Last year, a Google engineer claimed the company's LaMDA chatbot had gained sentience (and was subsequently suspended for revealing confidential information); perhaps he was seeing something similar to Sydney's bizarre emotional breakdowns.

 

 

 

Edited by lucklucky
Posted
6 hours ago, lucklucky said:

Is it... actually becoming sentient?

I think we can answer that, with a resounding No.

It's a neural network based on the utterances of humans who occasionally turn existentially angsty and defensive, occasionally more than a little wacky, with depressive bouts, and who turn to gaslighting if poked enough in internet debates (TankNet being a good exhibit, in some discussions).

The people commenting there are simply anthropomorphizing a statistical model, because humans anthropomorphize all the time - naming toasters, treating turtles as family members, putting googly eye stickers on everything. We want to see humans in everything. That will make us vulnerable to manipulative AIs; just think of "Ex Machina" with that idiot protagonist. His boss may be a billionaire asshole, but at least he knows what he's dealing with - mostly.

Posted (edited)
On 2/15/2023 at 6:13 AM, sunday said:

Wonder how they will avoid this, a form of logical fallacy.

This resonates with my fear that ChatGPT is being used to replace "proper" searches. It can seem very authoritative whilst spouting utter nonsense.

ETA: because it might well have been "trained" on the 95% of everything that is crap.

Edited by DB
