
How can any of you demand the right to free speech, unless you have the ability to use truth as a compass? Because all you are asking for if you have the former in exclusion, is the right to be wrong, or the right to repeat lies. Which is virtually worthless. You may as well demand the right to say quirkafleeg all day for all the benefit it will bring you.

Whose truth, your truth? You keep railing on about this but you have yet again failed to define just who is the final arbiter of truth.


 

 

How can any of you demand the right to free speech, unless you have the ability to use truth as a compass? Because all you are asking for if you have the former in exclusion, is the right to be wrong, or the right to repeat lies. Which is virtually worthless. You may as well demand the right to say quirkafleeg all day for all the benefit it will bring you.

Whose truth, your truth? You keep railing on about this but you have yet again failed to define just who is the final arbiter of truth.

 

 

If I say 'The Sky is green' and I keep spamming it and spamming it, you might infer one of two things.

1. I was nuts.

2. Or I had an agenda to sell.

 

There are SOME claims you know perfectly well are not true. Right? You wake up and just know the sky is not green without even looking at it. Fact. You know Trump is not spying for Russia. Fact. We know that Kennedy was not assassinated by his chauffeur. Fact. Conversely, there are facts we DO know, even if we find it hard to acknowledge them. We know Hitler killed Jews. Fact. We live on a round sphere that orbits the sun. We know we are in a trade war with China. Fact. We know that the Russians attacked Sergei Skripal in Salisbury. Fact. So while some events are up for debate, some events decidedly are not. And yet people STILL debate them as if there were something to debate. Sometimes it's because they are willfully obtuse, or more often, because they are put up to it by negative actors, state or otherwise. They are fed lies so often that they internalize them and believe them. Look up the Flat Earth Society and you will see that people will believe the damnedest things, if they are put up to it.

 

 

In the end it's common sense that tells us what is truth. We all have common sense, so the final decision about what is truth is up to you and me. That's all. And common sense, absent coaching, is usually right. Not always, but usually.

 

 

If you aren't standing at the gate with a verbal gun to ward off the lies, you have to expect that the state, or someone, will eventually do it for you. Yes, I can understand, and entirely endorse, debate about facts we do not know. There are events that are unknowable. But when we have continual running gun battles over events that are well known and in fact bleeding obvious, it's self-evident there is a problem. One that nobody here, or anywhere else, seems able to address.

 

I'm not a promoter of censorship, other than to suggest that either censorship or anarchy is what you are going to get if we continue down the road we are treading without diversion. The EU is already doing it. Imagine the effect THAT heavy-handedness is going to have on debate.

 

I'll leave it there. If people haven't got what I'm trying to say after I've said it six times, I don't believe it's going to stick. Wait until the next election, then you might get it. Or just roll by the Ukraine and Salisbury thread. The future is already here.

Edited by Stuart Galbraith

 

 

 

How can any of you demand the right to free speech, unless you have the ability to use truth as a compass? Because all you are asking for if you have the former in exclusion, is the right to be wrong, or the right to repeat lies. Which is virtually worthless. You may as well demand the right to say quirkafleeg all day for all the benefit it will bring you.

Whose truth, your truth? You keep railing on about this but you have yet again failed to define just who is the final arbiter of truth.

 

 

If I say 'The Sky is green' and I keep spamming it and spamming it, you might infer one of two things.

1. I was nuts.

2. Or I had an agenda to sell.

 

You wrote a cute little essay, but again, failed to answer the question. Who is the final arbiter of truth?

 

Nobody gets what you're saying because out one side of your mouth you're calling for some sort of truthful censorship, out the other side of your mouth you're saying if we don't have that censorship, the government will step in with its own censorship. In the second instance I think we can all agree that it is the government which, in the absence of free speech, can declare itself the final arbiter of truth. I'm interested in the first instance, out the first side of your mouth, exactly who is it that you believe should be the final arbiter of truth?

Edited by DKTanker

It seems similar conversations are making their rounds. Ran into this in a different social circle:

https://onezero.medium.com/the-dark-forest-theory-of-the-internet-7dc3e68a7cb1

 

It’s possible, I suppose, that a shift away from the mainstream internet and into the dark forests could permanently limit the mainstream’s influence. It could delegitimize it. In some ways that’s the story of the internet’s effect on broadcast television. But we forget how powerful television still is. And those of us building dark forests risk underestimating how powerful the mainstream channels will continue to be, and how minor our havens are compared to their immensity.


The article's a good read, even though what the author calls "dark forest" spaces were the rule before the web existed, and they have been a thriving niche since. This Grate Sight is a prime example of such a space.

 

 

Look how the champions of freedom of speech let their voices be heard for freedom-of-speech champion Liu Xiaobo!

 

http://www.tank-net.com/forums/index.php?showtopic=42580&hl=xiaobo

Are single individuals going to change China when masses of students were murdered by soldiers? I think not. I think not.

One man. If you spent even a little time learning about him, about the Tiananmen Square Massacre, or about China at all, you would surely have come across Charter 08.

https://en.m.wikipedia.org/wiki/Charter_08

 

You fail even more now in some ego-based, desperate attempt to find a "you're wrong" point, even if it means throwing Liu Xiaobo under the bus.

 

I'm not throwing him under the bus. I just know that my opinion of him and anything I might say will have ZERO effect on any policies that the Chinese government has. I could consider him the next coming of the Great Prophet Zarquon and it still wouldn't matter a single pair of fetid dingo's kidneys.

 

Do you want me to put a "free tibet" sticker on my car and go to some more meetings where the Dalai Lama talks about his multi-decade struggle?

 

What with Sesame Credit going live, I pretty much consider that China is going to have to implode OR get into a major shooting war.

 

Edited by rmgill

 

 

 

I'm not throwing him under the bus. I just know that my opinion of him and anything I might say will have ZERO effect on any policies that the Chinese government has. I could consider him the next coming of the Great Prophet Zarquon and it still wouldn't matter a single pair of fetid dingo's kidneys.

 

Do you want me to put a "free tibet" sticker on my car and go to some more meetings where the Dalai Lama talks about his multi-decade struggle?

 

What with Sesame Credit going live, I pretty much consider that China is going to have to implode OR get into a major shooting war.

 

 

 

Well of course it would have been a very tall order. Even Liu Xiaobo said in one of his interviews that even though he pursued democracy in China, he figured it was next to impossible, paraphrased as "not possible for like 300 years" or something like that.

 

No Tibet decals on the car necessary. In the context of 2000-2012 or so, just a general keeping of tabs on the new rising power, and a little interest and consciousness-raising by talking about it once in a while. After all, it was thought that if China developed a new middle class, democracy would follow, so at least a little tracking of those events made sense.

I would say that the complete failure of Charter 08, shortly followed by Liu Xiaobo's imprisonment, should have raised the alarm in the US population and made many people rethink buying "made in China". I know I have. Even before I moved to Japan, I found it odd how the shelves at Wal-Mart were stuffed with "Made in China" - a huge country with enormous potential, and very undemocratic. Totally different from "made in Mexico", "made in Vietnam", "made in Bangladesh", or wherever the undemocratic place might be, because none of those other places have the potential to rival the US.

So if the US population took its freedoms seriously with regard to China, I would have expected it to take the initiative and start buying less. Not necessarily quit altogether, but at least keep imports at the 150-200 billion USD level. Instead they kept ballooning to the current 500 billion USD. Of course Americans were distracted by other things, the Iraq and Afghanistan wars for starters. But still, I think the possibility to act, even in something as small as occasionally not buying made-in-China out of a consciousness raised by around 2010, shouldn't have been such a tall order, particularly for individuals who hold freedom of speech and the values of democracy dearly.

But anyway, that phase is over. At least now it looks like the US is starting to get serious with China. But things need to be well coordinated. It wouldn't be fair for the US to stand up to China if things devolve into a free-for-all, but as of now it looks like the pact is sticking together. Now it's just a matter of when QE finally gets here to join the rest of the pact for a photo shoot :)

 

Yeah, Sesame Credit: big government governing your behavior. I think a more subtle approach could work to keep China bottled up and just let them sit there with their "socialism with Chinese characteristics", but such a subtle approach would have to last well into the long term.

Edited by JasonJ

Do you want me to put a "free tibet" sticker on my car and go to some more meetings where the Dalai Lama talks about his multi-decade struggle?

 

I think the situation to be avoided is being clubbed over the head with various labels intended to silence those who don't want to do those things.

Edited by Nobu

 

 

 

 

How can any of you demand the right to free speech, unless you have the ability to use truth as a compass? Because all you are asking for if you have the former in exclusion, is the right to be wrong, or the right to repeat lies. Which is virtually worthless. You may as well demand the right to say quirkafleeg all day for all the benefit it will bring you.

Whose truth, your truth? You keep railing on about this but you have yet again failed to define just who is the final arbiter of truth.

 

 

If I say 'The Sky is green' and I keep spamming it and spamming it, you might infer one of two things.

1. I was nuts.

2. Or I had an agenda to sell.

 

You wrote a cute little essay, but again, failed to answer the question. Who is the final arbiter of truth?

 

Nobody gets what you're saying because out one side of your mouth you're calling for some sort of truthful censorship, out the other side of your mouth you're saying if we don't have that censorship, the government will step in with its own censorship. In the second instance I think we can all agree that it is the government which, in the absence of free speech, can declare itself the final arbiter of truth. I'm interested in the first instance, out the first side of your mouth, exactly who is it that you believe should be the final arbiter of truth?

 

You know DK, this is why I'm wary of getting into debates with you. I'm convinced you don't read a damn thing I write. It's right there, in the 3rd bloody paragraph. I would have underlined it but I didn't want to insult your intelligence.

 

Jesus. Why do I bother?

Edited by Stuart Galbraith

 

You know DK, this is why I'm wary of getting into debates with you. I'm convinced you don't read a damn thing I write. It's right there, in the 3rd bloody paragraph. I would have underlined it but I didn't want to insult your intelligence.

 

 

Jesus. Why do I bother?

 

 

 

In the end it's common sense that tells us what is truth. We all have common sense, so the final decision about what is truth is up to you and me. That's all. And common sense, absent coaching, is usually right. Not always, but usually.

Whose common sense? Who between you and me is the final arbiter of truth? Which one of us gets to censor the other, you know, so that Big Brother Government doesn't censor one or both of us?

 

Maybe your problem isn't with me, but with yourself. Perhaps you should give serious examination to the question posed instead of hand waving.

Edited by DKTanker

Here is the nature of the problem. And this is JUST the automated accounts. It takes no account of paid trolls.

 

https://www.thejakartapost.com/life/2019/05/25/fake-facebook-accounts-never-ending-battle-against-bots.html

The staggering figure of more than three billion fake accounts blocked by Facebook over a six-month period highlights the challenges faced by social networks in curbing automated accounts, or bots, and other nefarious efforts to manipulate the platforms.

Here are four key questions on fake accounts:

How did so many fake accounts crop up?

Facebook said this week it "disabled" 1.2 billion fake accounts in the last three months of 2018 and 2.19 billion in the first quarter of 2019.

Most fake social media accounts are "bots," created by automated programs to post certain kinds of information -- a violation of Facebook's terms of service and part of an effort to manipulate social conversations. Sophisticated actors can create millions of accounts using the same program.

Facebook said its artificial intelligence detects most of these efforts and disables the accounts before they can post on the platform. Still, it acknowledges that around five percent of the more than two billion active Facebook accounts are probably fake.

What's the harm from fake accounts?

Fake accounts may be used to amplify the popularity or dislike of a person or movement, thus distorting users' views of true public sentiment.

Bots played a disproportionate role in spreading misinformation on social media ahead of the 2016 US election, according to researchers. Malicious actors have been using these kinds of fake accounts to sow distrust and social division in many parts of the world, in some cases fomenting violence against groups or individuals.

Bots "don't just manipulate the conversation, they build groups and bridge groups," said Carnegie Mellon University computer scientist Kathleen Carley, who has researched social media bots.

"They can make people in one group believe they think the same thing as people in another group, and in doing so they build echo chambers."

Facebook says its artificial intelligence tools can identify and block fake accounts as they are being created -- and thus before they can post misinformation.

"These systems use a combination of signals such as patterns of using suspicious email addresses, suspicious actions, or other signals previously associated with other fake accounts we've removed," said Facebook analytics vice president Alex Schultz in a blog post.

 

Does Facebook have control of the situation?

The figures from Facebook's transparency report suggest Facebook is acting aggressively on fake accounts, said Onur Varol, a postdoctoral researcher at the Center for Complex Network Research at Northeastern University.

"Three billion is a big number -- it shows they don't want to miss any fake accounts. But they are willing to take a risk" of disabling some legitimate accounts, Varol said.

Legitimate users may be inconvenienced, but can generally get their accounts reinstated, the researcher noted.

"My feeling is that Facebook is making serious efforts" to combat fake accounts, he added.

But new bots are becoming more sophisticated and harder to detect, because they can use language nearly as well as humans, according to Carley.

"Facebook may have solved yesterday's battle but the nature of these things is changing so rapidly they may not be getting the new ones," she said.

Varol agreed, noting that "there are bots that understand natural language and can respond to people, and that's why it's important to keep research going."

 

How do you deal with a problem like that without some kind of censorship? Quite clearly, and unfortunately, you don't. I don't want it. So illustrate for me: with this level of threat, what alternatives are there?


Stuart, how CAN YOU censor an avalanche of bots that are harder and harder to distinguish from humans?

I agree that this is a problem, but at the same time I don't see censorship as the answer. It's like using a spiked baseball bat for brain surgery.


Stuart, how CAN YOU censor an avalanche of bots that are harder and harder to distinguish from humans?

I agree that this is a problem, but at the same time I don't see censorship as the answer. It's like using a spiked baseball bat for brain surgery.

 

Automation. I can only see an army of bots designed to detect bots and delete them automatically. And that is going to cause a whole host of civil liberties concerns in itself, but you have to think about 20 years from now, when bots are more intelligent than they currently are and become indistinguishable from humans. What do we do then? It's going to make discourse on the internet damn near impossible.
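To give a rough idea of what that kind of automated screening amounts to, here is a toy sketch of the "combination of signals" approach the Facebook article describes. Every signal name, weight and threshold below is invented purely for illustration; the real systems are machine-learned and vastly more sophisticated, this is just the shape of the idea.

# Toy illustration of signal-based fake-account scoring.
# All signals, weights and thresholds are made up for this example;
# it is NOT how Facebook (or anyone else) actually does it.
from dataclasses import dataclass

@dataclass
class Account:
    email_domain: str            # e.g. "gmail.com"
    age_days: int                # how long the account has existed
    posts_per_day: float         # average posting rate
    duplicate_post_ratio: float  # share of posts copied from other accounts
    friends: int

# Hypothetical throwaway e-mail domains seen on previously removed fake accounts.
SUSPICIOUS_DOMAINS = {"mailinator.example", "tempmail.example"}

def bot_score(acct: Account) -> float:
    """Combine several weak signals into a single suspicion score between 0 and 1."""
    score = 0.0
    if acct.email_domain in SUSPICIOUS_DOMAINS:
        score += 0.4                               # suspicious e-mail address
    if acct.age_days < 2 and acct.posts_per_day > 50:
        score += 0.3                               # brand-new account, flooding posts
    score += 0.3 * acct.duplicate_post_ratio       # copy-pasted content
    if acct.friends == 0 and acct.posts_per_day > 10:
        score += 0.2                               # shouting into the void
    return min(score, 1.0)

def triage(acct: Account) -> str:
    """Decide what to do with an account based on its score."""
    s = bot_score(acct)
    if s >= 0.7:
        return "disable"     # near-certain bot: remove automatically
    if s >= 0.4:
        return "challenge"   # borderline: demand a CAPTCHA or phone verification
    return "allow"

if __name__ == "__main__":
    spammer = Account("tempmail.example", 1, 200.0, 0.9, 0)
    regular = Account("gmail.com", 1500, 2.0, 0.0, 250)
    print(triage(spammer))   # -> disable
    print(triage(regular))   # -> allow

The obvious weakness is that hand-tuned rules like these either miss the clever bots or sweep up legitimate users, which is exactly the false-positive problem any automated purge runs into, and it only gets worse as the bots learn to look like normal accounts.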

 

I sometimes reflect on the Turing test and what implications that has for the future.

 

https://en.wikipedia.org/wiki/Turing_test

Predictions

Turing predicted that machines would eventually be able to pass the test; in fact, he estimated that by the year 2000, machines with around 100 MB of storage would be able to fool 30% of human judges in a five-minute test, and that people would no longer consider the phrase "thinking machine" contradictory.[4] (In practice, from 2009–2012, the Loebner Prize chatterbot contestants only managed to fool a judge once,[95] and that was only due to the human contestant pretending to be a chatbot.[96]) He further predicted that machine learning would be an important part of building powerful machines, a claim considered plausible by contemporary researchers in artificial intelligence.[69]

In a 2008 paper submitted to 19th Midwest Artificial Intelligence and Cognitive Science Conference, Dr. Shane T. Mueller predicted a modified Turing test called a "Cognitive Decathlon" could be accomplished within five years.[97]

By extrapolating an exponential growth of technology over several decades, futurist Ray Kurzweil predicted that Turing test-capable computers would be manufactured in the near future. In 1990, he set the year around 2020.[98] By 2005, he had revised his estimate to 2029.[99]

The Long Bet Project Bet Nr. 1 is a wager of $20,000 between Mitch Kapor (pessimist) and Ray Kurzweil (optimist) about whether a computer will pass a lengthy Turing test by the year 2029. During the Long Now Turing Test, each of three Turing test judges will conduct online interviews of each of the four Turing test candidates (i.e., the computer and the three Turing test human foils) for two hours each for a total of eight hours of interviews. The bet specifies the conditions in some detail.[100]

 

At that point, we are going to find it damn near impossible to communicate with each other, except down a phone line or in person.

Edited by Stuart Galbraith

 

Stuart, how CAN YOU censor an avalanche of bots that are harder and harder to distinguish from humans?

I agree that this is a problem, but at the same time I don't see censorship as the answer. It's like using a spiked baseball bat for brain surgery.

 

Automation. I can only see an army of bots designed to detect bots and delete them automatically.

 

Well, per your own report Facebook does exactly that, deleting bot accounts by the billions. But that very same article also points out that it may very well be a rearguard action as the bots are becoming better and better. So maybe more automation is the answer (I remain skeptical about the rate of false positives), but at some point I could very well imagine that we have to give up the concept of social media as such, as an uncontrollable horde of bots begins to dominate the opinion space.

 

Bots can be successful in influencing public opinion only because of the attention they receive. For example, pretty much nobody in Germany cares what happens on Twitter. And there's but a single party that can do Facebook (the Putin Puppets Party (AfD)), thanks to millions of fake accounts to amplify their regressive messages. So, okay, maybe that's good for three to six percent of the votes that they get - nothing to sneeze at, but at the same time nothing to be overly concerned about. The older I get, the more I calm down about these issues. Maybe I shouldn't. But if it weren't the AfD, it would be some equally stupid other party. I just don't see it yet that a large portion of the voters can be influenced by Russian troll farms for an extended period. They had their successes but I also think that the digital sphere overestimates the importance of Facebook because of the digital affinity of its members.

Edited by Ssnake

The more dangerous bots are the morons who blindly share stuff about everything on social media despite being proven wrong repeatedly by their friends/followers. Fighting bots is noble and all, but it doesn't mitigate the fact that people online are total idiots and need no help in doing stupid things.


The more dangerous bots are the morons who blindly share stuff about everything on social media despite being proven wrong repeatedly by their friends/followers. Fighting bots is noble and all, but it doesn't mitigate the fact that people online are total idiots and need no help in doing stupid things.

 

FIFY

 

Unfortunately, current social media filtering has a large chunk of the population convinced that they know what's going on, etc. Social media has a multiplier effect.

 

When I read some of the idiocy my black friends and acquaintances have been posting about "Dr. Sebi" and Nipsey Hussle, I despair. Even university-educated people propagate stuff implying that "Dr. Sebi" had the cure for cancer, etc., but Big Pharma assassinated him.


I just don't see it yet that a large portion of the voters can be influenced by Russian troll farms for an extended period.

 

I think this resistance to troll farm propaganda, Russian or otherwise, is a positive indicator of the health of a society in various ways. What disturbs me is when nations with similar democratic advantages do not exhibit similar resistance and end up being weakened by it.

 

Another reason why Japan's stance, similar to Germany's, on the non-status of Scientology as a religion is encouraging in various ways.

Edited by Nobu

Fake accounts may be used to amplify the popularity or dislike of a person or movement, thus distorting users' views of true public sentiment.

On the other hand, it is creating public sentiment. When 100% of public sentiment is derived from media narratives (whether MSM or "alternative" channels), what does "true public sentiment" mean? When distortion is all that is there, is the distinction meaningful?

 

 

How do you deal with a problem like that without some kind of censorship?

To play devil's advocate a bit, I suppose you could counter it with your own bot army which pushes fact-checking and counterarguments in all of the same communication channels in which these "bad" bots are operating. The notion is repugnant, but if I had to choose, I'd say benevolent bot-propaganda might be preferable to censorship.

 

The same criticisms apply to both censorship and counter-propaganda, though -- who do you trust to push a truly benign narrative? Everyone has their biases and agendas. Government is one of the least trustworthy actors, and Facebook and Google are working hard to distinguish themselves as worse.

Edited by TTK Ciar

The man may have given his dogs illegal names, but at least he didn't teach them to give the Nazi salute.

 

Man detained in China for giving dogs 'illegal' names

 

A man has been detained in eastern China for giving his dogs "illegal" names, it's reported.
According to popular newspaper Beijing News, a dog breeder in his early 30s surnamed Ban in eastern Anhui Province was summoned by the police on Monday, after posting on mobile messenger WeChat that he had two new dogs, named Chengguan and Xieguan.
The names attracted controversy because they refer to government and civil service workers respectively.
"Chengguan" are officials employed in urban areas to tackle low-level crime, and "Xieguan" are informal community workers such as traffic assistants.
The paper says that Mr Ban gave the dogs the names "for fun", but the authorities have failed to see the funny side - particularly with regard to the former.
The Yingzhou Police said that they had immediately launched an investigation into the man, who they say had issued "insulting information… against law enforcement personnel".
They added that, "in accordance with the relevant provisions of the People's Republic of China Law on Public Security", he must spend 10 days in an administrative detention centre in the city of Xiangyang.
A police officer surnamed Li told Beijing News that Mr Ban had been increasingly provocative on his WeChat account, and says that his actions had "caused great harm to the nation and the city's urban management, in terms of their feelings".
Mr Ban says he regrets his actions, with Beijing News quoting him as saying "I didn't know the law; I didn't know this was illegal." While some are of the view that he was "looking for trouble", his arrest has been met with much shock on the popular Sina Weibo microblog.
Many users voice concern about the conditions under which Mr Ban was detained. "Can you tell me which law stipulates that dogs can't be called Chengguan?" one user asks. Another asks: "What other words could you be imprisoned for?"
Some users have been joking that Mr Ban was detained for "suspected subversion of state power" or "revealing state secrets", implying that the chengguan are, actually, dogs.
Reporting by Kerry Allen


The machine learning isn't doing so hot. Post a report on YouTube, mention a group that is 'bad', and your report is de-ranked/hidden.

If you're a major news network, your videos are not de-ranked/hidden. Tim Pool has reported on this repeatedly.


IF we assume AI capable of convincingly pretending to be human, I guess we can also assume there will arise AI able to recognise, at least some of the time, its fellow bots. But yes, already we are beginning to sink in the tide of misinformation. This site has arguably already been damaged by organizations that have (wrongly) identified it as an unsafe site. So looked at that way, the enemies of freedom are already winning, aren't they?

