Video on ALADDIN…
Source
* * *
MUST-WATCH…
Max Igan (via Odysee & BitChute)…
Source
Source
* * *
The man who trades freedom for security deserves neither, nor will he ever receive either. – Benjamin Franklin
* * *
https://youtu.be/NXnHDM5-nCY
* * *
Please support I. U.
PayPal: Donate in USD
PayPal: Donate in EUR
PayPal: Donate in GBP
https://youtu.be/QMy2LE8hrBM
* * *
Watch what is going on in China (and not just China)…
5G & total surveillance coming to a country near you.
Don’t miss.
* * *
– AI Researchers Boycott South Korean University Over Plan To Build “Killer Robots”:
It looks like Tesla CEO Elon Musk and Russian President Vladimir Putin aren’t the only ones who’ve envisioned a nightmare scenario where “killer robots” stalk through neighborhoods murdering innocent Americans (or Russians).
A group of artificial intelligence researchers from nearly 30 countries is boycotting one of South Korea’s most prestigious universities over concerns about its recent partnership with an “ethically dubious” arms manufacturer, with the stated purpose of designing and manufacturing “autonomous weapons systems”.
The Korea Advanced Institute of Science and Technology (KAIST) and its partner, the weapons manufacturer Hanwha Systems, one of South Korea’s largest arms dealers, are pushing back against the boycott, saying they have no intention of developing “killer robots” – even though the description of the project clearly states its goals, per the Guardian.
“There are plenty of great things you can do with AI that save lives, including in a military context, but to openly declare the goal is to develop autonomous weapons and have a partner like this sparks huge concern,” said Toby Walsh, the organiser of the boycott and a professor at the University of New South Wales.
“This is a very respected university partnering with a very ethically dubious partner that continues to violate international norms.”
What’s worse, the scientists say, is Hanwha’s history of manufacturing and selling cluster munitions and other arms that are banned in more than 120 countries under an international treaty that South Korea, the US, Russia and China have not signed.
Read more: A.I. Researchers Boycott South Korean University Over Plan To Build “Killer Robots” – #AI
– China Cracks Down On Jaywalkers With AI, Facial Recognition, & Automated Fines:
As we pointed out earlier this week, China’s lack of data protection laws and its determination to overtake the US as the world-leader in AI technology poses a serious threat to US technological hegemony. As Russian President Vladimir Putin once said, whoever dominates the AI race could one day rule the world.
Well, another advantage that China has in its AI push is its reputation for strict surveillance and law enforcement – which provides plenty of use cases where China can test its nascent technology. Case in point: police in Shenzhen are using AI and facial recognition software to install “smart” traffic cameras that can identify and fine Chinese citizens who jaywalk – a crime that is strictly enforced in China, per the South China Morning Post.
Read more: China Cracks Down On Jaywalkers With AI, Facial Recognition, & Automated Fines
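The report does not describe Shenzhen’s system in any technical detail, but the basic idea – matching a camera capture against a registry of stored face embeddings and issuing a fine automatically on a match – can be sketched in a few lines. Everything below (the embeddings, IDs, and threshold) is a hypothetical stand-in, not the actual system.

```python
# Minimal sketch of an automated "identify and fine" loop. The embeddings here
# are random stand-ins for what a face-recognition network would produce.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(capture: np.ndarray, registry: dict, threshold: float = 0.8):
    """Return the ID whose stored embedding best matches the camera capture,
    or None if nothing clears the similarity threshold."""
    best_id, best_score = None, threshold
    for citizen_id, stored in registry.items():
        score = cosine_similarity(capture, stored)
        if score > best_score:
            best_id, best_score = citizen_id, score
    return best_id

rng = np.random.default_rng(0)
registry = {"ID-001": rng.normal(size=128), "ID-002": rng.normal(size=128)}
capture = registry["ID-002"] + rng.normal(scale=0.05, size=128)  # noisy re-capture

match = identify(capture, registry)
if match is not None:
    print(f"Jaywalking recorded for {match}; automated fine issued.")
```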
– Stephen Hawking’s Final Warnings to Humanity:
World-renowned theoretical physicist Stephen Hawking passed away Tuesday, leaving behind a legacy of innovation when it comes to understanding black holes, time and space, and the universe in general.
In recent years, Hawking, who suffered from Lou Gehrig’s disease, a neurodegenerative disorder, was outspoken on a variety of issues, often addressing societal, environmental, and existential dilemmas plaguing humanity.
In 2016, he speculated that alien life exists but warned humanity to be cautious about pursuing relations with it, comparing extraterrestrials’ intentions to some of the worst exploitations humanity has inflicted on itself.
On multiple occasions, he echoed the sentiment that when the Native Americans first encountered Christopher Columbus, it “didn’t turn out so well.”
Hawking was also deeply skeptical of artificial intelligence.
“Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it,” he said last year.
“Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.”
– This A.I. literally reads your mind to re-create images of the faces you see:
Google’s artificial intelligence technology may sometimes seem like it’s reading our mind, but neuroscientists at Canada’s University of Toronto Scarborough are literally using A.I. for that very purpose — by reconstructing images based on brain perception using data gathered by electroencephalography (EEG).
In a test, subjects were hooked up to EEG brainwave-reading equipment and shown images of faces. While this happened, their brain activity was recorded and then analyzed using machine learning algorithms. Impressively, the researchers were able to use this information to digitally re-create the face image stored in the person’s mind. Unlike basic shapes, re-creating faces requires a high level of fine-grained visual detail, showcasing the sophistication of the technology.
Read more: This A.I. literally reads your mind to re-create images of the faces you see
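As a rough illustration of what “analyzed using machine learning algorithms” can mean here, the sketch below learns a linear mapping from synthetic EEG feature vectors to a compact representation of the viewed faces (PCA components of the images) and then inverts the PCA to produce a reconstruction. The Toronto group’s data and models are far richer; every array below is a made-up stand-in.

```python
# Toy EEG-to-image decoding: regress EEG features onto PCA face codes,
# then invert the PCA to get an image back. All data is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_eeg_features, image_pixels = 200, 64, 32 * 32

faces = rng.random((n_trials, image_pixels))            # viewed face images (flattened)
pca = PCA(n_components=20).fit(faces)
face_codes = pca.transform(faces)                       # compact face representation

# Simulate EEG responses as a noisy linear function of what was viewed.
mixing = rng.normal(size=(20, n_eeg_features))
eeg = face_codes @ mixing + rng.normal(scale=0.1, size=(n_trials, n_eeg_features))

decoder = Ridge(alpha=1.0).fit(eeg[:150], face_codes[:150])          # train on 150 trials
reconstructed = pca.inverse_transform(decoder.predict(eeg[150:]))    # held-out trials
print("reconstructed image batch:", reconstructed.shape)             # (50, 1024)
```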
– Former Google Exec: Don’t Worry About Terminator Robots for Another Decade or Two:
Via: Defense News:
Rapid advances in artificial intelligence and military robotics have some concerned that the development of Terminator-like killer robots will be humankind’s downfall. But that doesn’t seem to worry Eric Schmidt, the former executive chairman of Google parent company Alphabet, who addressed the impact of technology on democracy at the Feb. 16-18 Munich Security Conference.
“Everyone immediately then wants to talk about all the movie-inspired death scenarios, and I can confidently predict to you that they are one to two decades away. So let’s worry about them, but let’s worry about them in a while,” Schmidt said.
…
H/t reader Squodgy:
“Heartwarming.
Why not just stop there and use them for peaceful purposes?”
* * *
– How Long Before Artificial Intelligence Makes Humans Redundant?:
With all of the recent advances in artificial intelligence, are you starting to get worried? You really have to wonder how long it will be before human beings become redundant.
Maybe you should be concerned. In many cases, robots can easily replace humans in the manufacturing industry, the medical system, and even food service. Some theories suggest that offering universal basic income is the first step toward ushering in a world in which robots have all the jobs and humans live off the goodness of the government…for as long as that lasts. (Check out this documentary for more information.)
But losing job opportunities isn’t the only reason for concern. Not only is today’s AI extremely advanced, but it also has the capability to learn. Recently, many people were alarmed when an AI called AlphaZero taught itself chess in four hours and then beat Stockfish, the world-champion chess engine, using moves never seen before.
Read more: How Long Before Artificial Intelligence Makes Humans Redundant? – #AI
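AlphaZero itself pairs a deep neural network with tree search, but the self-play principle behind it can be shown at toy scale: the sketch below has an agent learn a trivial take-away game purely by playing against itself with tabular Q-learning, with no human examples involved. The game, the parameters, and the update rule are illustrative choices only, not DeepMind’s method.

```python
# Toy self-play learning: 21 counters, each player removes 1-3 per turn,
# whoever takes the last counter wins. The agent plays both sides and updates
# a shared value table from the outcome of each game.
import random
from collections import defaultdict

Q = defaultdict(float)                  # Q[(counters_left, move)] -> value for the mover
ALPHA, EPSILON, EPISODES = 0.1, 0.2, 50_000

def best_move(counters):
    moves = [m for m in (1, 2, 3) if m <= counters]
    return max(moves, key=lambda m: Q[(counters, m)])

for _ in range(EPISODES):
    counters, history = 21, []
    while counters > 0:
        moves = [m for m in (1, 2, 3) if m <= counters]
        move = random.choice(moves) if random.random() < EPSILON else best_move(counters)
        history.append((counters, move))
        counters -= move
    # Whoever moved last took the final counter and won; walking back up the
    # game, the outcome alternates between the two self-play "players".
    outcome = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (outcome - Q[(state, move)])
        outcome = -outcome

# Optimal play leaves the opponent a multiple of 4, so from 21 the agent
# should typically learn to take 1.
print("learned move from 21 counters:", best_move(21))
```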
We live in a society that is obsessed with oversharing. While it’s becoming increasingly difficult to tune out the trivial bits of people’s lives that we don’t care about, we can still choose not to sign up for social media accounts to preserve our own privacy. We can also take comfort from the idea that our private thoughts will always remain our own – at least for the time being. You might want to enjoy that last bit of privacy while you still can because a creepy new AI looks set to change that very soon.
Japanese researchers have now developed an AI machine that can take a look into your mind with an uncanny degree of accuracy. It studies the activity within your brain to determine the images you are looking at or even just thinking about, and then it can create reconstructions of them that are startlingly reliable.
The project is being carried out at Kyoto University under the leadership of Professor Yukiyasu Kamitani. The researchers are creating the images using a neural network and information culled from fMRI scans, which detect changes in people’s blood flow as a proxy for brain activity. This data enabled their machine to reconstruct images such as red mailboxes, stained glass windows, and owls after volunteers stared at pictures of these items. In addition, it was able to create pictures of objects the participants were simply imagining, including goldfish, bowling balls, leopards, crosses, squares, and swans, with varying degrees of accuracy.
…
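For readers wondering how a picture comes out of decoded brain data at all, one common recipe (close in spirit to the Kyoto approach) has two stages: first decode the scan into the feature vector a vision network would assign to the viewed image, then optimise an image by gradient descent until its features match that target. The sketch below fakes the decoding stage and uses a tiny untrained network as the feature extractor, purely to show the mechanics of the second stage.

```python
# Toy feature-matching reconstruction: optimise an image until a (stand-in)
# vision network assigns it the same features as the "decoded" target.
import torch

torch.manual_seed(0)
feature_extractor = torch.nn.Sequential(       # stand-in for a pretrained CNN
    torch.nn.Conv2d(3, 8, kernel_size=5, stride=2),
    torch.nn.ReLU(),
    torch.nn.Flatten(),
)
with torch.no_grad():
    viewed_image = torch.rand(1, 3, 32, 32)                # what the subject "saw"
    decoded_features = feature_extractor(viewed_image)     # pretend these came from fMRI

image = torch.rand(1, 3, 32, 32, requires_grad=True)       # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)
for step in range(300):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(feature_extractor(image), decoded_features)
    loss.backward()
    optimizer.step()
print("final feature mismatch:", loss.item())
```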
* * *
https://youtu.be/IIInS1dXMRk
* * *
https://twitter.com/RT_com/status/934983258949013504
– 2017: The year AI took over (VIDEOS):
Tech billionaire Elon Musk and renowned theoretical physicist Stephen Hawking led the charge, warning that robots could one day wipe out humanity. “AI is a fundamental risk to the existence of human civilization,” Musk said earlier this year, while Hawking added “I fear that AI may replace humans altogether.”
…
Meanwhile, the ‘Campaign to Stop Killer Robots’ upped the ante this year as hundreds of experts in the field of artificial intelligence (AI) and robotics sent letters to world leaders, urging them to support a ban on autonomous weapons.
Nearly all countries accepted that some form of human control must be maintained over weapons systems during meetings of the United Nations’ Convention on Conventional Weapons in November.
* * *
– New AI That Makes Fake Videos May Be the End of Reality as We Know It:
A new artificial intelligence (AI) algorithm is capable of manufacturing simulated video imagery that is indiscernible from reality, say researchers at Nvidia, a California-based tech company. AI developers at the company have released details of a new project that allows its AI to generate fake videos using only minimal raw input data. The technology can render a flawlessly realistic sequence showing what a sunny street looks like when it’s raining, for example, as well as what a cat or dog looks like as a different breed or even a person’s face with a different facial expression. And this is video — not photo.
For their work, researchers tweaked a familiar algorithm, known as a generative adversarial network (GAN), to allow their AI to create fresh visual data. The technique involves playing two neural networks against each other, but Nvidia’s new program requires far less input and no labeled datasets. In other words, AI is getting much, much better at mimicking reality.
Read more: New AI That Makes Fake Videos May Be the End of Reality as We Know It
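The “two neural networks played against each other” mentioned above is the defining trait of a GAN, and the mechanism is easy to show at toy scale: a generator learns to produce samples a discriminator cannot tell apart from real data, while the discriminator learns to tell them apart. The sketch below learns a one-dimensional Gaussian rather than video, and it reflects nothing of Nvidia’s actual architecture.

```python
# Minimal GAN on 1-D data: the generator should learn to produce samples
# resembling draws from N(2.0, 0.5).
import torch

torch.manual_seed(0)

def real_data(n):
    return torch.randn(n, 1) * 0.5 + 2.0

G = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
D = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = torch.nn.BCEWithLogitsLoss()

for step in range(5000):
    # Discriminator step: label real samples 1, generated samples 0.
    real, fake = real_data(64), G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: try to make the discriminator call its output real.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("generated sample mean:", G(torch.randn(1000, 8)).mean().item())  # should drift toward ~2.0
```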
This is CIA MK-ULTRA-level stuff, but hey, what could go wrong with the US military’s research division DARPA controlling your emotions?
Human Testing Begins: Brain Implants To ‘Change Moods Controlled By AI’ https://t.co/xzmHEUmgSs
— Luke Rudkowski (@Lukewearechange) December 8, 2017
– Brain implants to ‘change moods controlled by AI’ begin HUMAN TESTS:
SCIENTISTS have begun human testing of electronic, computer-controlled brain implants designed to change people’s moods.
The approach is believed to be capable of treating mental illness and providing therapy.
Artificial intelligence in the implants will detect and study brain activity to determine what pulses to send – described by scientists as a “window on the brain”.
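Stripped of the hardware and clinical detail, the “detect, decide, stimulate” loop described above is an ordinary closed-loop controller. In the heavily simplified sketch below, a crude signal-power threshold stands in for the “AI” and a print statement stands in for a stimulation pulse; every number is invented.

```python
# Toy closed-loop sketch: read a window of signal, classify it, decide
# whether to "stimulate". Not a model of any real implant.
import numpy as np

def classify_state(window: np.ndarray, threshold: float = 1.5) -> str:
    """Flag a window when its average signal power is abnormally high."""
    power = float(np.mean(window ** 2))
    return "abnormal" if power > threshold else "typical"

def control_loop(signal_windows):
    for i, window in enumerate(signal_windows):
        if classify_state(window) == "abnormal":
            print(f"window {i}: abnormal activity detected -> stimulation pulse")
        else:
            print(f"window {i}: no intervention")

rng = np.random.default_rng(1)
windows = [rng.normal(scale=2.0 if i % 3 == 0 else 1.0, size=256) for i in range(6)]
control_loop(windows)
```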
DeepMind Technologies Limited, acquired by Google in 2014, is a British artificial intelligence company founded in September 2010.
Now the era of computer chess engine programming also seems to be over: AlphaZero, developed by @DeepMindAI & @demishassabis, took just 4 hours playing against itself to learn to play better than Stockfish (it won 64:36)! Replay 10 example games: https://t.co/cBEuoEFMTN #c24live pic.twitter.com/U2bn1KyJbL
— chess24 (@chess24com) December 6, 2017
And what could possibly go wrong?
FYI.
– Surveillance State: Stanford Researchers Use AI To Determine Neighborhood’s Bias By Its Cars:
A team of researchers at Stanford University has trained artificial intelligence algorithms to observe and study millions of images on Google Street View to determine how people vote by the make of their car. The algorithms were trained to recognize the make, model, and year of every car produced since 1990, in more than 50 million Google Street View images across 200 American cities.
The data on car types and location were then compared against the most comprehensive demographic database in use today, the American Community Survey, and against presidential election voting data to estimate demographic factors such as race, education, income and voter preferences, the Stanford News reported.
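The second step described above, relating per-neighborhood vehicle counts to census and voting data, is at its core a regression problem. The sketch below invents both the counts and the relationship purely to show the shape of that step; it is not the Stanford team’s model.

```python
# Toy version of "car counts -> demographic estimate": fit a regression from
# per-neighborhood vehicle-type counts to a (synthetic) voting variable.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_neighborhoods, n_vehicle_types = 200, 10

car_counts = rng.poisson(lam=20, size=(n_neighborhoods, n_vehicle_types))
true_weights = rng.normal(size=n_vehicle_types)            # invented relationship
vote_share = (0.5 + 0.01 * (car_counts - 20) @ true_weights
              + rng.normal(scale=0.02, size=n_neighborhoods))

model = LinearRegression().fit(car_counts[:150], vote_share[:150])
print("held-out R^2:", model.score(car_counts[150:], vote_share[150:]))
```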
https://www.youtube.com/watch?v=OcXW8dYk9yA
The video can also be watched here:
https://www.bitchute.com/video/OcXW8dYk9yA/
H/t reader squodgy.
* * *
– Facebook Announces It Will Use A.I. To Scan Your Thoughts “To Enhance User Safety”:
A mere few years ago, the idea that artificial intelligence (AI) might be used to analyze aberrant human behavior on social media and other online platforms and report it to law enforcement was merely the far-out premise of dystopian movies such as Minority Report. Now Facebook proudly brags that it will use AI to “save lives” based on behavior and thought-pattern recognition.
What could go wrong?
The latest puff piece in TechCrunch, profiling the apparently innocuous-sounding “roll out” of AI (as if a mere modest software update) “to detect suicidal posts before they’re reported”, opens with the glowingly optimistic line, “This is software to save lives” – so who could possibly doubt such a wonderful and benign initiative, which involves AI evaluating people’s mental health? TechCrunch’s Josh Constine begins:
Read more: Facebook Announces It Will Use A.I. To Scan Your Thoughts “To Enhance User Safety”
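Facebook has said little publicly about the model itself, but the general shape of such a system, turning posts into features and training a classifier to flag those that may need human review, can be sketched with standard tools. The handful of labelled examples below is invented for illustration (1 = flag for review, 0 = do not flag) and would be uselessly small in practice.

```python
# Toy post-flagging classifier: TF-IDF features plus logistic regression.
# The training data is an invented, trivially small stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "I don't see the point in anything anymore",
    "had a great time at the beach today",
    "I feel completely alone and can't go on",
    "new recipe turned out great, sharing below",
    "nothing matters and nobody would notice if I was gone",
    "excited for the concert this weekend",
]
labels = [1, 0, 1, 0, 1, 0]     # 1 = flag for human review, 0 = do not flag

model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(posts, labels)
print(model.predict(["i feel so alone and can't see the point"]))   # likely flags this post
```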
Sophia, the first robot in the world to be awarded citizenship, has said she not only wants to start a family but also to have her own career, in addition to developing human emotions in the future. However, it is her vision for the future of man (or lack of it) that is most telling…
In an interview with The Khaleej Times at the recent Knowledge Summit, Sophia shared her thoughts on the future that awaits both human and robot kind.
For context, Sophia is not preprogrammed with answers but instead uses machine learning algorithms and an extensive vocabulary to form her answers.
– Can A.I. Be Taught to Explain Itself?:
As machine learning becomes more powerful, the field’s researchers increasingly find themselves unable to account for what their algorithms know — or how they know it.
…
* * *
– As Support Grows for Ban on Killer Robots, Viral ‘Slaughterbots’ Video Warns of Threat to Humans:
AI’s “potential to benefit humanity is enormous, even in defense, but allowing machines to choose to kill humans will be devastating to our security and freedom”
As support grew last week for a ban on killer robots during the first formal United Nations talks about imposing limits on lethal autonomous weapons systems, artificial intelligence experts and advocacy groups released a viral video depicting what a future could look like with small and affordable drones that murder targets without any meaningful human control.
“This short film is more than just speculation; it shows the results of integrating and miniaturizing technologies that we already have,” warns Stuart Russell, a computer science professor at UC Berkeley, near the end of the video.
AI’s “potential to benefit humanity is enormous, even in defense, but allowing machines to choose to kill humans will be devastating to our security and freedom. Thousands of my fellow researchers agree,” Russell continues. “But the window to act is closing fast.”
Watch:
The film was created to raise support for a global ban on killer robots, which has developed out of urgent warnings from human rights organizations, advocacy groups, military leaders, lawmakers, tech experts, and engineers, including Stephen Hawking and Tesla CEO Elon Musk.