China and AI

As you may have heard, the Chinese government is investing quite heavily in artificial intelligence (AI), aiming to become a world leader in the field within the next decade. And some in the U.S. believe it. But if I were advising the PRC government, I would suggest that this may not be the best field for China to focus on. The old tradition of a rote memory approach to learning is still thriving in China. There is no change likely, as it is a cultural issue, not just policy. And rote memory thinking is absolutely antithetical to good AI.

As I write this in China, the emphasis on AI is in evidence in many ways. One that really struck me the other day emerged in my visit to my favorite bookstore in China, Shanghai’s Shucheng (“Book City”). Wonderful place, five or six floors, mostly in Chinese. And there it was, an entire table devoted to AI:

[Photo: a bookstore table stacked with AI titles.] The other side of the table, not pictured, is also completely filled with AI titles.

This put AI on an equal footing with famous programming languages such as C++ and Java, and dwarfed the stock of two or three AI titles I saw on my last visit two years ago.

And yet, my talk about AI at a Chinese university the week before exposed the major cultural obstacle I described above. The hospitality shown me was great, and people asked good questions during the Q&A at the end of the talk. But one of the questions floored me: “How can you be so passionate about your subject matter?” I believe that the questioner, a grad student, was curious about this because he himself lacked such passion. He was there to study this field mainly because of a perceived hot job market. He will learn a few AI methods, but will have no idea as to what they really mean in actual applications. Keen intuitive insight is key in applications, and the rote-memory, learn-some-recipes approach just won’t work well.

This was not an isolated incident by any means, as I have written in detail before. Indeed, even though the professor who invited me does have a genuine interest in the subject, he too wondered about my passion, saying “It’s amazing that you are still so active” (i.e. in spite of my gray hair).

With such a huge population, China does in fact have some people who are not rote-memory oriented, and who are quite creative. Unfortunately, the system does not favor them, and arguably tends to weed them out. The Chinese government is aware of this and wants very much to remedy it, but I don’t think even they realize how deep the cultural roots go on this matter.

So, fear in the U.S. that China is breathing down its neck on AI is misguided. But so is the hoopla on AI itself. What is today regarded as AI is not traditional artificial intelligence in the first place. “AI” today means machine learning, which in turn is hype about old nonparametric statistical prediction methods, applied to modern huge and complex data sets. Highly important and useful, yes, but not new.
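To make that last point concrete, consider k-nearest-neighbor regression, a nonparametric prediction method dating back to the 1950s that underlies much of what is now marketed as "machine learning." The sketch below is purely illustrative (the data and function name are mine, not from any library): predict a response by averaging the responses of the k closest training points, with no parametric model assumed at all.

```python
import math

def knn_predict(train, query, k=3):
    """k-nearest-neighbor regression, a classic nonparametric method:
    predict by averaging the responses of the k closest training points."""
    # train: list of (feature_vector, response) pairs
    nearest = sorted(train, key=lambda xy: math.dist(xy[0], query))
    return sum(y for _, y in nearest[:k]) / k

# Toy data: y is roughly x1 + x2, but no functional form is assumed.
train = [((0.0, 0.0), 0.0), ((1.0, 0.0), 1.0),
         ((0.0, 1.0), 1.0), ((1.0, 1.0), 2.0),
         ((2.0, 1.0), 3.0), ((1.0, 2.0), 3.0)]

print(knn_predict(train, (1.5, 1.0), k=3))  # → 2.0
```

Scale this idea up to huge data sets and fancier distance/weighting schemes and you have much of the modern "AI" toolkit; the statistical core is decades old.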


26 thoughts on “China and AI”

  1. “Machine learning” bores me to death. China could become the world’s master of it, and it would not matter in the least. OTOH I don’t see any progress in the US on the “real AI”, either. Just every barista and English major suddenly wants to be a data scientist. And the data scientists I do talk to complain that 99% of their jobs is not data science.

    BTW, just got a copy of the 1985 book, “Artificial Intelligence: The Very Idea” by John Haugeland. I got it *because* it is now a bit of an anachronism. But Haugeland, who coined the term “GOFAI” for “Good Old Fashioned AI”, meaning the symbolic-processing version from the 1950s through the 1980s, and who is himself a philosopher and no kind of techie, is overall very sophisticated in telling that story. Anyone who wants an excellent exercise for an AI seminar for seniors or grad students: offer constructive criticisms of this book!


    • Back in the 1980s, AI was supposed to conquer the world in 10 or 20 years. I’m still waiting. The average person has no idea that what AI really is, is a huge comparison table. The bigger the table, the higher the chance that you will have matches, or maybe not.


  2. Within four years the “aha” moment will happen in China with AI.

    So your prescient warning to the USA is on the mark.

    I wish that our political system could adapt. Sadly it can’t.


  3. The conclusion drawn in this post (about the Chinese rote learning approach as not conducive to AI work) sounds like wishful thinking…

    Like anything else in Chinese science and cultural development, time is on their side…

    Just wait a few years and then come back to re-examine this notion of lack of imagination = no success…



    • It really depends on how one defines success. If it means success in business, fine, the Chinese are tops in that. But if one is talking about technical innovation, as I am, then no.


      • “He was there to study this field mainly because of a perceived hot job market.”

        Is that concept not invading US tech, medicine, etc.? It has been obvious to me. RARE day I see a doctor who’s genuinely interested in actually doing their job. RARE day I see a commercial engineer who’s genuinely interested in actually doing their job. Have you not noticed the incessant citing of credentials, paired with a tanking of actual accomplishments? Previously known as “resting on one’s laurels”? Along with the importation of the “tiger mom”: my kid can out-credential your kid. The goal is the academic credentials, not the profession, let alone innovation.

        US innovation has slid down the drain, in favor of looking for fast easy money:
        – become more profitable by wage decimation
        – collect and sell population data
        – put a touch screen on [anything] and collect more data

        Additionally, what’s in China’s favor, they’re currently instantiating Western urban infrastructure configuration, which doesn’t scale (smog, traffic jams, etc.). And so, they’ll HAVE TO innovate.


  4. Have you considered the possibility that China’s primary interest in AI may have to do with more effectively monitoring and controlling its population? You don’t really need the most sophisticated AI to police social media, listen in on phone conversations, and decrement the social credit scores of those who cross the line.


    • This has been mentioned a lot in the press, and by me on various occasions. One is reminded of Big Brother monitoring at every turn in China. The government wants to know where you have been and what you are doing. For instance, “free” WiFi at a café usually requires you to state your personal ID number. The government has publicly boasted that it has a vast facial recognition database, with high accuracy. So they indeed have their own interest in AI; but being a world leader in some technology is a matter of huge pride to them (avenging perceived past mistreatment by Western powers), so I guarantee you that their interest in AI involves much more than Big Brother activities.


  5. “AI” today means machine learning, which in turn is hype about old nonparametric statistical prediction methods, applied to modern huge and complex data sets. Highly important and useful, yes, but not new.

    Beautiful description.
    And one that anyone who has been in the ‘biz’ for more than a week or two would intuitively understand.
    Rules based filters and data steering and even self describing data interfaces are not a HAL 9000.
    But it sounds swell with brand spanking new ‘technologies’ like the Cloud (VAX cluster 1981) and Virtual Environments (George Pal movies from the 50’s) etc.

    And as you are aware, age and gray hair have proved no limit on passion or innovation, in my case and, I suspect, for many other Americans well past their Silicon Valley expiration date.



      • It’s becoming scarce, Norm, as so many are ousted from the context in which much of it occurs: work and collaboration.
        Passion, now redefined as working incessant hours. I don’t know anyone who does their best work fried/burnt out.
        Innovation, now redefined as Finance “innovation” only: reduce all costs, bearing down hard on labor expense, and “externalizing” other expense – get the public to pay for it.
        One employer I had, a total scream: built an “Innovations Center”, which employees were allowed to visit for half an hour a week, and which was half an hour away, to innovate.
        Bizarro investor/corporate now runs “contests”: submit your tech ideas and “win” $500. Prime example of Wall St’s pursuit of elusive gold: something for nothing.


    • “brand spanking new ‘technologies’ like the Cloud (VAX cluster 1981)”
      If only the mainframe manufacturers had offered remote terminal access, eh? Which is basically “cloud” + Chrome. We could’ve skipped all this re-creation via PC/networking/remote storage activity. Many industries atrophy around their business model, bringing about their demise, in this case mainframes. Yes, computing devolved, solely because of business model, only to be reinstantiated via other means/methods.
      Silicon Valley’s “expiration date” for an engineer is 30 years old, which is contributing to innovation’s stall. Examples of under-30 dependency include: popups recurring, spam emails recurring, Microsoft’s cyclically reskinning their antique DOS for serial retraining/recertifying/deprecating as their business model, and use of force on customers, such as Apple intentionally deprecating hardware (battery inaccessible, dialing back performance as devices age, bluetooth-only headphones, etc.).


  6. I’ve always been suspicious of AI for the following reason: AI is based on rules, and rules are based on the “typical” or statistically common condition being monitored. This means that any outlier has some trouble with AI:
    1. Outliers tend to be viewed as “trouble” or a defect, instead of an opportunity to learn and innovate.
    2. Outliers tend to get “filtered out” as the system “learns” to adhere closer and closer to the “norm”.
    3. People tend to adjust to being considered “normal” instead of looking for better results.

    In other words, AI tends to reinforce behavior (of systems and people) to fit a designer’s assumptions or standards. It tends toward mediocrity and predictability.

    Example: AI for “self-driving cars” is going to punish the best drivers. They will lose their ability to make decisions en-route, such as choosing a different travel route, stopping along the way for a break or a side-trip, or shopping for something they had forgotten before starting the journey. People will begin to resign themselves to the decisions the software makes, and then if there is an emergency, their own driving abilities and travel knowledge will be atrophied. Asking a passive driver to suddenly “grab the wheel” and react to avoid a collision is going to be ineffective. Taking away the wheel completely is going to doom a certain percentage of people to accidents that the AI cannot avoid, but an experienced driver could have avoided.


  7. An article came out of the WSJ the other day with the title, “‘How Long Do I Have Left?’ AI Can Help Answer That Question”

    “A new algorithm developed at Stanford Medicine could help. Analyzing data from hundreds of thousands of anonymized medical records, the model predicts which patients are likely to die in the next 3 to 12 months.”

    I guess this would be “about old nonparametric statistical prediction methods, applied to modern huge and complex data sets. Highly important and useful, yes, but not new.”


  8. If the older IT workers had a place to go after Silicon Valley, things might still be rosy. My experience with government services was that both state and federal governments rely heavily on F1 “students” for interns and H1b firms for employees. I work in the greater DC area and there are at least 4 phony IT colleges that are about 100% Indian F1 students. Two were shut down recently but there are several more going up.

    I thought nothing of them until they replaced me with them at my last job. Large government contractors like Lockheed and Booz Allen rely primarily on these former students to burn through unspent agency funds each summer. Where are the gray haired ones to go if not here? Even some parts of the DOD use them.

    Has anyone done a write-up on the use of public funds to employ H1B and F1 visa holders?


    • Look at the bright side. Those new employees don’t know $$it. How long are Lockheed and Booz going to survive on legacy systems?


      • For a VERY long time.
        H1B exploded over Y2K, using the COBOL shortage as an excuse: very few COBOL engineers, and the H1Bs were not COBOL literate either. ANY excuse will do.


  9. It really bothers me that when the press does an article about AI, they leave what they are referring to to the imagination of the reader. They do not really define it. However, if you study what they are talking about closely, it breaks down into two things: 1) Automation, or 2) Machine Intelligence (consciousness/creativity).

    Automation has been going on since the beginning of the Industrial Revolution. At that time, I do not think the most radical Luddite thought that the steam engine was capable of conscious awareness. Driver-less cars seem impressive until you realize that driving a motor vehicle is a very narrow logical task (if-then-else) that does not require creativity or conscious awareness.

    Artificial consciousness is something altogether different. No one has yet demonstrated a man-made artifact that exhibits conscious awareness or creativity. No computer program has conducted an Einstein thought experiment, painted a Picasso or written a Shakespearean play.

    Someday I am sure it will be achieved, but it will not be achieved by code flippers at Microsoft, Google or Facebook. These guys will be a footnote in history by the time it is achieved.

    This hype is just their way of repackaging technology that’s been around for decades and deceiving unsophisticated people (politicians) into giving them more H1Bs and R&D tax credits.


    • Driving capacity has been greatly exaggerated. Google’s: an accident in a parking lot going 2 mph. Uber’s: killed a pedestrian. Tesla had to pull a lot of robotics out of their manufacturing and back down their production estimates to less than half as they discovered the capability of the automation was overstated.
      The big flaw: it’s written by humans, who have a tiny subset of use cases in their heads, and struggle to get even those coded correctly.


      • Wow, you’ve really hit on a succinct description, “a tiny subset of use cases.”

        That’s why I didn’t even want antilock brakes when they first came out, though these days one has no choice.


        • Very low standards in commercial software, Norm. Much of it is frail. The multitudes of use cases missed by new grads and poor engineers are referred to as ‘corner cases’ by management when they ship it anyway. In the case of Uber, deadly.


  10. A great blog, Norman. Do you think the revival of artificial neural networks (deep learning) will eventually lead us to artificial intelligence?
    Researchers and scientists are abandoning traditional machine learning and statistical methods in favor of deep learning.

