  • sara5307

AI - the 60 minute interview with Google's Sundar Pichai

60 Minutes interviewer Scott Pelley talks with Sundar Pichai, CEO of Google-Alphabet, and James Manyika, SVP, about AI and Google's version, called Bard.

It is called "The Revolution". Nothing like the media using emotive words to bolster fear.

I've always felt that AI is forcing us, humans, into a much needed EVOLUTION of heart. To be irreplaceable: you can learn and mimic knowledge, but you can't mimic unprompted acts of love, especially those that make a stranger put themselves in danger to help another, or sacrifice something for another. (Maybe there is an AI or a robot that does that without being prompted to, but I do not know of it.)

The below are my streams of consciousness, written whilst listening. Rather than edit and polish it the way LinkedIn, Grammarly, or any other mainly American app suggests I should write or sound to boost my chances of exposure, I will write as my mind thinks. Perhaps this will be a way to separate human from artificial intelligence, and to put a stamp on my irreplaceable uniqueness.

I add here that three themes always come up for me as I read the hype and fear around AI:

1) AI is only as good as the data it has access to. And just because the West has all its thoughts, ideas, and ways of doing things written down, ready to be mined, does not make AI the font of ALL humanity, as Scott Pelley suggests. Thank God for oral cultures, knowledge in art form, knowledge suppressed and denied because of colonisation and other oppressions. Because we still have sacred and un-AI-mineable knowledge.

And where does the planet fit into being a decision-maker?

A worry is that this belief in AI 'knowledge' only recognises and makes 'one' type of 'knowledge'. It is what colonisation did to Indigenous knowledge, language, thoughts, art, practices, science, medicine, well-being, faith, spirituality, play, love, societal organisation, and laws, but on steroids!

2) Who owns all the data that Google, FB, etc. mine? Did they get permission? Do they pay anything to the sources?

3) WHY? Especially when it came to robots learning soccer, or beating human chess players. Why have it for more than administrative aid? There were some OK answers today, especially in the health sector, or possibly in education. But again, they are limited to the data it has access to.

BUT a 4th was raised today. Maybe we, as humans, really are that predictable. When the interviewer said it sounds almost sentient, the SVP James Manyika said it isn't; it just finds patterns in the billions of pieces of data the AI has been given, and predicts the next words. We think, say, and do predictably, giving credence to the saying that there is nothing new under the sun. But again, what data does it have access to? Where is the diversity?

The Stream of Consciousness

The interviewer was talking about how mind-blowing, 'confounding', Bard was on the 'best speeches in the world' prompts, yet they were only American references. He says, "Bard appears to have the sum of human knowledge", but it's just the dominant information of the West, the US. How arrogant, and typical.

He held up Bard's summary of the New Testament as amazing, yet again, Bard only has access to a certain number of cultural lenses through which to view the New Testament. It had Latin, but didn't speak of Hebrew. What about our Māori lens on the New Testament?

And how Bard created a 'human'-sounding story based on the six-word Hemingway short tale. He asked how it could come up with such human sentiment.

The senior VP, James Manyika, states clearly there is no sentience at the back of this. It all comes essentially from the 'data' it has access to: all the ideas and thoughts reflected in the novels and books that companies like Google have scraped from libraries. The models learn from it, build patterns, and predict the next words, to aid humans the way calculators aid maths students; but AI can also teach students how to understand their maths, possibly better than a teacher could.

It makes things up, called 'hallucinations'.

Q - Why can't it just say "I don't know"? Why make things up?

Disrupting jobs - Sundar gives the example of a radiologist: they feed in all the patients due in that day, and the AI goes through all their records and can give an order of priority. This could be useful, but in the analogue world there are biases in that. Will they appear in the AI's data and in the method it uses to determine priority?

Q - Can it read all your lab results too, and based on medical information, help prompt better diagnoses (for knowable conditions, I suppose) and better medical solutions, also based on knowable medicines? But it will not give alternative medicines and diagnoses based on little-known cultural knowledge.

Also - e.g. Tony, my paediatric surgeon mate, who invented a solution for sealing two body parts in a baby using two tiny magnets; without that invention the baby would have died. AI could NEVER come up with that.

Q - Will we lose our uniqueness, our innovation?

Emergent properties - AI teaches itself skills it wasn't trained to know. The VP says the AI knew only a few Bengali words, and now it has taught itself Bengali. They said they (Bard) are now aiming for 1,000 languages. But what of the communities of these languages? Is their consent asked for? Do they get any royalties for their language being used to power AI and enrich GOOGLE?

They talk about the black box: they/Google don't even understand everything, and the interviewer says, yet you've let it loose? His answer: "We don't fully understand the human mind either." AND? SO?

How does it make up human qualities like 'grief', when it's only meant to predict the next word?

Sundar - two thoughts - 1) it is just reading the algorithm, the pattern it has seen billions of times in the data. (Maybe we are just really predictable.)

2) the emergent properties theory - it is teaching itself the skill of reasoning, of planning, etc.

Sundar says to approach it with humility.

Then they go to the robots playing soccer, which they've taught themselves. They were just given the instruction: score a goal. The AI then watches millions of games, coming up with techniques.

Q - For what purpose? WHY? Teaching them to move out of factory work into mining, dangerous construction work, disaster recovery. OK, I get that.

Q - But what of prosthetics?

2010 was when it started. Hassabis, CEO of DeepMind (Oxford, Cambridge, MIT), developed AlphaZero, an AI chess player that 'created' chess strategies a human couldn't, because it learns chess only from billions of moves.

He sold it to Google in 2014 to get access to the computing power.

"Brute force of computational power + the human brain/mind." BUT WHY?? He says that with all the data it has mined, it has cracked a big scientific biology problem that would have taken years to sort out.

It is solving problems in seconds. He says it made that info freely available, and out of it there have been malaria solutions, new antibiotics, etc.

Q - In saying they made that freely available: did they get the knowledge for free? Who did the data come from, and belong to? Do the sources get payouts? Without the data there is no AI.

The interviewer asks whether AI knowing 'everything in the world' (or something to that effect), something no one human can do, diminishes humanity. Again and again I say: easy on knowing 'everything'; it's just the data they have access to!

I think this quest for AI seems to continue to value the mind and knowledge above all else, reminding us of the saying "knowledge is power" (but what for?).

AND will the AI have a heart - e.g., will it help a human, or the planet, without a prompt?

James Manyika believes, as Oscar Wilde did of the first industrialisation, that humans will be freed to think more deeply, become more profound. Hmmm… we haven't yet.

Sundar finishes by saying he can see in 10 years time that “we will have some form of very capable intelligence that can do amazing things and we need to adapt.”

Q - Look where our intelligence without that heart has gotten us. Today, in parts of our world, there are wars, famines, poverty, drought, flooding, loneliness, killing, racism, sexism, many ISMs. In 10 years' time will we, the humans, have evolved and be more capable than the AI, by being more capable of heart, love, care, conscience? And will we finally include our planet as a decision-maker?
