
Xi Jinping is using artificial intelligence to enhance his government’s totalitarian control—and he’s exporting this technology to regimes around the globe.


*

As rulers of some of the world’s largest complex social organizations, ancient Chinese emperors well understood the relationship between information flows and power, and the value of surveillance. During the 11th century, a Song-dynasty emperor realized that China’s elegant walled cities had become too numerous to be monitored from Beijing, so he deputized locals to police them. A few decades before the digital era’s dawn, Chiang Kai-shek made use of this self-policing tradition, asking citizens to watch for dissidents in their midst, so that communist rebellions could be stamped out in their infancy. When Mao took over, he arranged cities into grids, making each square its own work unit, where local spies kept “sharp eyes” out for counterrevolutionary behavior, no matter how trivial. During the initial coronavirus outbreak, Chinese social-media apps promoted hotlines where people could report those suspected of hiding symptoms.

Xi has appropriated the phrase sharp eyes, with all its historical resonances, as his chosen name for the AI-powered surveillance cameras that will soon span China. With AI, Xi can build history’s most oppressive authoritarian apparatus, without the manpower Mao needed to keep information about dissent flowing to a single, centralized node. In China’s most prominent AI start-ups—SenseTime, CloudWalk, Megvii, Hikvision, iFlytek, Meiya Pico—Xi has found willing commercial partners. And in Xinjiang’s Muslim minority, he has found his test population.


The Chinese Communist Party has long been suspicious of religion, and not just as a result of Marxist influence. Only a century and a half ago—yesterday, in the memory of a 5,000-year-old civilization—Hong Xiuquan, a quasi-Christian mystic converted by Western missionaries, launched the Taiping Rebellion, an apocalyptic 14-year campaign that may have killed more people than the First World War. Today, in China’s single-party political system, religion is an alternative source of ultimate authority, which means it must be co-opted or destroyed.

By 2009, China’s Uighurs had become weary after decades of discrimination and land confiscation. They launched mass protests and a smattering of suicide attacks against Chinese police. In 2014, Xi cracked down, directing Xinjiang’s provincial government to destroy mosques and reduce Uighur neighborhoods to rubble. More than 1 million Uighurs were disappeared into concentration camps. Many were tortured and made to perform slave labor.

Uighurs who were spared the camps now make up the most intensely surveilled population on Earth. Not all of the surveillance is digital. The Chinese government has moved thousands of Han Chinese “big brothers and sisters” into homes in Xinjiang’s ancient Silk Road cities, to monitor Uighurs’ forced assimilation to mainstream Chinese culture. They eat meals with the family, and some “big brothers” sleep in the same bed as the wives of detained Uighur men.

Meanwhile, AI-powered sensors lurk everywhere, including in Uighurs’ purses and pants pockets. According to the anthropologist Darren Byler, some Uighurs buried their mobile phones containing Islamic materials, or even froze their data cards into dumplings for safekeeping, when Xi’s campaign of cultural erasure reached full tilt. But police have since forced them to install nanny apps on their new phones. The apps use algorithms to hunt for “ideological viruses” day and night. They can scan chat logs for Quran verses, and look for Arabic script in memes and other image files.

Uighurs can’t use the usual work-arounds. Installing a VPN would likely invite an investigation, so they can’t download WhatsApp or any other prohibited encrypted-chat software. Purchasing prayer rugs online, storing digital copies of Muslim books, and downloading sermons from a favorite imam are all risky activities. If a Uighur were to use WeChat’s payment system to make a donation to a mosque, authorities might take note.

The nanny apps work in tandem with the police, who spot-check phones at checkpoints, scrolling through recent calls and texts. Even an innocent digital association—being in a group text with a recent mosque attendee, for instance—could result in detention. Staying off social media altogether is no solution, because digital inactivity itself can raise suspicions. The police are required to note when Uighurs deviate from any of their normal behavior patterns. Their database wants to know if Uighurs start leaving their home through the back door instead of the front. It wants to know if they spend less time talking to neighbors than they used to. Electricity use is monitored by an algorithm for unusual patterns, which could indicate an unregistered resident.


*

(Illustration: Jonathan Djob Nkondo)

Uighurs can travel only a few blocks before encountering a checkpoint outfitted with one of Xinjiang’s hundreds of thousands of surveillance cameras. Footage from the cameras is processed by algorithms that match faces with snapshots taken by police at “health checks.” At these checks, police extract all the data they can from Uighurs’ bodies. They measure height and take a blood sample. They record voices and swab DNA. Some Uighurs have even been forced to participate in experiments that mine genetic data, to see how DNA produces distinctly Uighur-like chins and ears. Police will likely use the pandemic as a pretext to take still more data from Uighur bodies.

Uighur women are also made to endure pregnancy checks. Some are forced to have abortions, or get an IUD inserted. Others are sterilized by the state. Police are known to rip unauthorized children away from their parents, who are then detained. Such measures have reduced the birthrate in some regions of Xinjiang by more than 60 percent in three years.

When Uighurs reach the edge of their neighborhood, an automated system takes note. The same system tracks them as they move through smaller checkpoints, at banks, parks, and schools. When they pump gas, the system can determine whether they are the car’s owner. At the city’s perimeter, they’re forced to exit their cars, so their face and ID card can be scanned again.

Read: Uighurs can’t escape Chinese repression, even in Europe

The lucky Uighurs who are able to travel abroad—many have had their passports confiscated—are advised to return quickly. If they do not, police interrogators are dispatched to the doorsteps of their relatives and friends. Not that going abroad is any kind of escape: In a chilling glimpse at how a future authoritarian bloc might function, Xi’s strongman allies—even those in Muslim-majority countries such as Egypt—have been more than happy to arrest and deport Uighurs back to the open-air prison that is Xinjiang.

Xi seems to have used Xinjiang as a laboratory to fine-tune the sensory and analytical powers of his new digital panopticon before expanding its reach across the mainland. CETC, the state-owned company that built much of Xinjiang’s surveillance system, now boasts of pilot projects in Zhejiang, Guangdong, and Shenzhen. These are meant to lay “a robust foundation for a nationwide rollout,” according to the company, and they represent only one piece of China’s coalescing mega-network of human-monitoring technology.

China is an ideal setting for an experiment in total surveillance. Its population is extremely online. The country is home to more than 1 billion mobile phones, all chock-full of sophisticated sensors. Each one logs search-engine queries, websites visited, and mobile payments, which are ubiquitous. When I used a chip-based credit card to buy coffee in Beijing’s hip Sanlitun neighborhood, people glared as if I’d written a check.

All of these data points can be time-stamped and geo-tagged. And because a new regulation requires telecom firms to scan the face of anyone who signs up for cellphone services, phones’ data can now be attached to a specific person’s face. SenseTime, which helped build Xinjiang’s surveillance state, recently bragged that its software can identify people wearing masks. Another company, Hanwang, claims that its facial-recognition technology can recognize mask wearers 95 percent of the time. China’s personal-data harvest even reaps from citizens who lack phones. Out in the countryside, villagers line up to have their faces scanned, from multiple angles, by private firms in exchange for cookware.

An authoritarian state with enough processing power could feed every blip of a citizen’s neural activity into a government database.

Until recently, it was difficult to imagine how China could integrate all of these data into a single surveillance system, but no longer. In 2018, a cybersecurity activist hacked into a facial-recognition system that appeared to be connected to the government and was synthesizing a surprising combination of data streams. The system was capable of detecting Uighurs by their ethnic features, and it could tell whether people’s eyes or mouth were open, whether they were smiling, whether they had a beard, and whether they were wearing sunglasses. It logged the date, time, and serial numbers—all traceable to individual users—of Wi-Fi-enabled phones that passed within its reach. It was hosted by Alibaba and made reference to City Brain, an AI-powered software platform that China’s government has tasked the company with building.


Read: China’s artificial-intelligence boom

City Brain is, as the name suggests, a kind of automated nerve center, capable of synthesizing data streams from a multitude of sensors distributed throughout an urban environment. Many of its proposed uses are benign technocratic functions. Its algorithms could, for instance, count people and cars, to help with red-light timing and subway-line planning. Data from sensor-laden trash cans could make waste pickup more timely and efficient.

But City Brain and its successor technologies will also enable new forms of integrated surveillance. Some of these will enjoy broad public support: City Brain could be trained to spot lost children, or luggage abandoned by tourists or terrorists. It could flag loiterers, or homeless people, or rioters. Anyone in any kind of danger could summon help by waving a hand in a distinctive way that would be instantly recognized by ever-vigilant computer vision. Earpiece-wearing police officers could be directed to the scene by an AI voice assistant.

City Brain would be especially useful in a pandemic. (One of Alibaba’s sister companies created the app that color-coded citizens’ disease risk, while silently sending their health and travel data to police.) As Beijing’s outbreak spread, some malls and restaurants in the city began scanning potential customers’ phones, pulling data from mobile carriers to see whether they’d recently traveled. Mobile carriers also sent municipal governments lists of people who had come to their city from Wuhan, where the coronavirus was first detected. And Chinese AI companies began making networked facial-recognition helmets for police, with built-in infrared fever detectors, capable of sending data to the government. City Brain could automate these processes, or integrate its data streams.

Even China’s most complex AI systems are still brittle. City Brain hasn’t yet fully integrated its range of surveillance capabilities, and its ancestor systems have suffered some embarrassing performance issues: In 2018, one of the government’s AI-powered cameras mistook a face on the side of a city bus for a jaywalker. But the software is getting better, and there’s no technical reason it can’t be implemented on a mass scale.

The data streams that could be fed into a City Brain–like system are essentially unlimited. In addition to footage from the 1.9 million facial-recognition cameras that the Chinese telecom firm China Tower is installing in cooperation with SenseTime, City Brain could absorb feeds from cameras fastened to lampposts and hanging above street corners. It could make use of the cameras that Chinese police hide in traffic cones, and those strapped to officers, both uniformed and plainclothes. The state could force retailers to provide data from in-store cameras, which can now detect the direction of your gaze across a shelf, and which could soon see around corners by reading shadows. Precious little public space would be unwatched.

America’s police departments have begun to avail themselves of footage from Amazon’s home-security cameras. In their more innocent applications, these cameras adorn doorbells, but many are also aimed at neighbors’ houses. China’s government could harvest footage from equivalent Chinese products. It could tap the cameras attached to ride-share cars, or the self-driving vehicles that may soon replace them: Automated vehicles will be covered in a whole host of sensors, including some that will take in information much richer than 2-D video. Data from a massive fleet of them could be stitched together, and supplemented by other City Brain streams, to produce a 3-D model of the city that’s updated second by second. Each refresh could log every human’s location within the model. Such a system would make unidentified faces a priority, perhaps by sending drone swarms to secure a positive ID.

The model’s data could be time-synced to audio from any networked device with a microphone, including smart speakers, smartwatches, and less obvious Internet of Things devices like smart mattresses, smart diapers, and smart sex toys. All of these sources could coalesce into a multitrack, location-specific audio set that could be parsed by polyglot algorithms capable of interpreting words spoken in thousands of tongues. This set would be useful to security services, especially in places without cameras: China’s iFlytek is perfecting a technology that can recognize individuals by their “voiceprint.”

In the decades to come, City Brain or its successor systems may even be able to read unspoken thoughts. Drones can already be controlled by helmets that sense and transmit neural signals, and researchers are now designing brain-computer interfaces that go well beyond autofill, to allow you to type just by thinking. An authoritarian state with enough processing power could force the makers of such software to feed every blip of a citizen’s neural activity into a government database. China has recently been pushing citizens to download and use a propaganda app. The government could use emotion-tracking software to monitor reactions to a political stimulus within an app. A silent, suppressed response to a meme or a clip from a Xi speech would be a meaningful data point to a precog algorithm.

All of these time-synced feeds of on-the-ground data could be supplemented by footage from drones, whose gigapixel cameras can record whole cityscapes in the kind of crystalline detail that allows for license-plate reading and gait recognition. “Spy bird” drones already swoop and circle above Chinese cities, disguised as doves. City Brain’s feeds could be synthesized with data from systems in other urban areas, to form a multidimensional, real-time account of nearly all human activity within China. Server farms across China will soon be able to hold multiple angles of high-definition footage of every moment of every Chinese person’s life.

“I tell my students that I hope none of them will be involved in killer robots. They have only a short time on Earth. There are many other things they could be doing with their future.”

It’s important to stress that systems of this scope are still in development. Most of China’s personal data are not yet integrated together, even within individual companies. Nor does China’s government have a one-stop data repository, in part because of turf wars between agencies. But there are no hard political barriers to the integration of all these data, especially for the security state’s use. To the contrary, private firms are required, by formal statute, to assist China’s intelligence services.

The government might soon have a rich, auto-populating data profile for all of its 1 billion–plus citizens. Each profile would comprise millions of data points, including the person’s every appearance in surveilled space, as well as all of her communications and purchases. Her threat risk to the party’s power could constantly be updated in real time, with a more granular score than those used in China’s pilot “social credit” schemes, which already aim to give every citizen a public social-reputation score based on things like social-media connections and buying habits. Algorithms could monitor her digital data score, along with everyone else’s, continuously, without ever feeling the fatigue that hit Stasi officers working the late shift. False positives—deeming someone a threat for innocuous behavior—would be encouraged, in order to boost the system’s built-in chilling effects, so that she’d turn her sharp eyes on her own behavior, to avoid the slightest appearance of dissent.

If her risk factor fluctuated upward—whether due to some suspicious pattern in her movements, her social associations, her insufficient attention to a propaganda-consumption app, or some correlation known only to the AI—a purely automated system could limit her movement. It could prevent her from purchasing plane or train tickets. It could disallow passage through checkpoints. It could remotely commandeer “smart locks” in public or private spaces, to confine her until security forces arrived.

In recent years, a few members of the Chinese intelligentsia have sounded the warning about misused AI, most notably the computer scientist Yi Zeng and the philosopher Zhao Tingyang. In the spring of 2019, Yi published “The Beijing AI Principles,” a manifesto on AI’s potential to interfere with autonomy, dignity, privacy, and a host of other human values.

It was Yi whom I’d come to visit at Beijing’s Institute of Automation, where, in addition to his work on AI ethics, he serves as the deputy director of the Research Center for Brain-Inspired Intelligence. He retrieved me from the lobby. Yi looked young for his age, 37, with kind eyes and a solid frame slimmed down by black sweatpants and a hoodie.

On the way to Yi’s office, we passed one of his labs, where a research assistant hovered over a microscope, watching electrochemical signals flash neuron-to-neuron through mouse-brain tissue. We sat down at a long table in a conference room adjoining his office, taking in the gray, fogged-in cityscape while his assistant fetched tea.

I asked Yi how “The Beijing AI Principles” had been received. “People say, ‘This is just an official show from the Beijing government,’ ” he told me. “But this is my life’s work.”

Yi talked freely about AI’s potential misuses. He mentioned a project deployed to a select group of Chinese schools, where facial recognition was used to track not just student attendance but also whether individual students were paying attention.

“I hate that software,” Yi said. “I have to use that word: hate.”

He went on like this for a while, enumerating various unethical applications of AI. “I teach a course on the philosophy of AI,” he said. “I tell my students that I hope none of them will be involved in killer robots. They have only a short time on Earth. There are many other things they could be doing with their future.”

Yi clearly knew the academic literature on tech ethics cold. But when I asked him about the political efficacy of his work, his answers were less compelling.