Cambridge AI Social held its inaugural meeting at the Bradfield Centre last week, with keynote speaker Professor Lawrence Paulson outlining the challenges – and opportunities – posed by the development of AI and AGI.
The new networking group was founded by Aaron Turner in 2022. An independent AGI researcher since 1985, Aaron welcomed the guests who had made it through the blizzard outside.
“In the mid-1960s, Joseph Weizenbaum created Eliza, the first-ever chatbot,” Aaron told the audience of about 50 in the auditorium at the Bradfield. “At the time, most people saw Eliza as being smarter than it actually was – a phenomenon known today as the Eliza effect.”
It turns out that the Eliza effect is still at work in Cambridge in 2023: one of the main topics of the evening was the analysis and discussion of ChatGPT, the chatbot launched by OpenAI in November last year. So is ChatGPT smart?
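Weizenbaum's Eliza worked by simple pattern matching and substitution, with no understanding at all – which is exactly why the effect named after it is so striking. A minimal sketch of the idea in Python (the rules below are hypothetical illustrations, not Weizenbaum's actual script):

```python
import re

# Hypothetical Eliza-style rules: (pattern, response template).
# Matched groups from the user's input are reflected back.
RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r".*mother.*", "Tell me more about your family."),
    (r"(.*)", "Please go on."),  # fallback keeps the dialogue moving
]

def eliza_reply(text: str) -> str:
    """Return the first rule's response whose pattern matches."""
    for pattern, template in RULES:
        match = re.match(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(eliza_reply("I need a holiday"))  # Why do you need a holiday?
```

A dozen lines of string reflection can feel like a listener with empathy – that gap between mechanism and impression is the Eliza effect.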
“ChatGPT has had more coverage than any other AI topic in the past 40 years,” Aaron said in his introduction, “but under the hood it is a large language model. It’s like a dictionary in which all the words are defined in terms of each other – and in that circular web there is no real meaning, as none of the words are tied to real-world experience.
“So with large language models there can be no real intelligence or cognition – it’s basically just an illusion of intelligence.
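The “words linked only to other words” point can be illustrated in miniature with a next-word predictor built from nothing but co-occurrence counts – a drastically simplified, illustrative stand-in for what a large language model learns at vastly greater scale:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus.
# Every prediction comes purely from word-to-word statistics; nothing
# here is grounded in real-world experience.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent successor seen in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # cat ("cat" follows "the" twice in the corpus)
```

Real LLMs condition on long contexts with billions of parameters rather than single-word counts, but the in-principle objection Aaron raises – statistics over symbols, with no link to the world – applies to the toy and the giant alike.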
“These models provide some utility, but from an artificial general intelligence (AGI) perspective they don’t have an IQ. Intelligence has three components: induction, deduction and abduction. Deduction has a 2,500-year history and is best understood through automated theorem proving, or ATP – a field closely connected with AI. If an ATP-based system achieves sufficient performance, it can perform real deductive reasoning…
“So let me introduce, live and in colour, the one and only, the wonderful Professor Lawrence Paulson.”
Professor Paulson is Professor of Computational Logic at the University of Cambridge and Director of Research in the Computer Lab at the William Gates Building.
“Logic, logic technology, and some of its implications for artificial intelligence are what we’ll be looking at in the next hour,” he told the audience.
A few slides later – “to prove there are many stages” in AI – Professor Paulson says: “So what about intelligence? As Aaron said, it means induction, deduction or abduction – and I won’t mention abduction at all. So, induction: I notice the stars moving across the night sky, night after night, and other bodies too, and I try to understand where they will be next – that’s inductive reasoning. Even plants grow toward the light – that’s inductive reasoning: ‘There was light here yesterday, and that’s where the light will be next.’”
It is claimed that people can get by without using deductive reasoning.
“I don’t think even very intelligent people use deductive reasoning much,” suggests Professor Paulson. “Even Warren Buffett, who reads all the business figures and does various calculations, relies at least in part on intuition – which is inductive reasoning. For some stocks he’ll just say, ‘This doesn’t look good.’”
Professor Paulson’s advice is: “Don’t try to prove an entire theorem at once – prove it a little at a time.” Artificial intelligence, meanwhile, gets things wrong more often than you might think.
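Machine-checked, step-by-step proof is Professor Paulson's own field (his system is Isabelle). The “a little at a time” style can be sketched in a proof assistant – here a toy Lean 4 example of my own, not drawn from the talk – where small lemmas are established first and then chained into the final result:

```lean
-- Prove a small lemma first...
theorem add_zero_right (n : Nat) : n + 0 = n := rfl

-- ...then apply it one rewrite at a time, rather than
-- attacking the whole goal in a single step.
theorem add_zero_twice (n : Nat) : (n + 0) + 0 = n := by
  rw [add_zero_right, add_zero_right]
```

Each `rw` step discharges one small piece of the goal, so a failure points at exactly one lemma – the practical payoff of proving things a little at a time.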
“As we know, ChatGPT is often wrong,” Professor Paulson continued. “I asked it about myself and it gave me all sorts of activities and things that I haven’t actually done – but that someone like me plausibly might have, so that’s a good start.”
After the laughs die down, it’s time for questions and answers. Someone in the audience suggests that ChatGPT is “not so much like a monkey trying to write Shakespeare, but more like a 10-year-old trying to write a story – it’s better than the monkey at everything”.
“Yes,” Professor Paulson replied. “Is it a matter of brute force, or not brute force?”
The brute-force theory of computation states that you just keep doing exhaustive searches until you get a result. The alternative is to find some way of reducing the amount of computation needed to arrive at one. Either way, Professor Paulson recommends that programmers avoid C++ – a language that is “definitely not a way to build a trustworthy system”.
He continued: “This is early work [on ChatGPT], and if you look at computer chess, humans have been beaten by computers doing giant searches – so brute force has some advantages.”
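The exhaustive-search idea can be seen in miniature with a hypothetical subset-sum solver that simply tries every combination of values. The cost grows as 2^n in the number of values – which is why practical chess engines pair such search with aggressive pruning:

```python
from itertools import combinations

# Brute-force search: try every subset of the values, smallest first,
# and return the first one that sums to the target. No cleverness,
# no pruning -- just exhaustive enumeration.
def subset_sum_brute(values, target):
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            if sum(combo) == target:
                return combo
    return None  # exhausted all 2^n subsets without a hit

print(subset_sum_brute([3, 9, 8, 4, 5], 12))  # (3, 9)
```

For five values this checks at most 32 subsets; for fifty it would be over a quadrillion – the trade-off between raw search and smarter computation that the question was getting at.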
Is ChatGPT some kind of threat to academic work, another audience member asks.
“As for the idea that people are going to use ChatGPT to write articles – well, what’s the point, because the output needs close scrutiny anyway, so you might as well do it yourself. ChatGPT is usually very generic and very nice – although you can ask it to write in the style of Jane Austen, or of the very nice HP Lovecraft.”
Next up was pizza, and I found myself saying hello to Paul Crane, the new CW CEO, among others.
While mingling, I conducted a quick vox pop on C++. Is it really as bad as Professor Paulson claims?
“It does the job. I prefer R and C. You can compare it to learning French,” said Charlie Gao, of Hibiki AI at Cambridge Science Park.
“If you are good at French, this does not mean that you will be good at Russian. Fortran is definitely faster.”
Standing next to Charlie, Harry Little of the CAIS team added: “A lot of people in the C world have said that C++ was just a tool for raising money.
“I never liked C++ – I like C – but he [Prof Paulson] wasn’t saying that C++ is bullshit. He was saying that anything programmed in a standard language can’t be trusted, because it was programmed by a human, and humans can make mistakes – which is a problem.”
I tried to imagine what a programming language untouched by human hands – unconceived by any human mind – would look like, but these are complicated questions.
Aaron said after the event: “I think we had 52 people there in all, which is a respectable result for a first event, and certainly enough for a decent party in the socializing part of the evening.
“Professor Paulson’s lecture was very well-crafted and expertly delivered – he’s a really great teacher. The audience asked a good number of questions at the end of the talk, which is a good indication that they were genuinely engaged, and therefore enjoyed it.
“It seemed to me that there was a real buzz during the socializing phase, with several people staying for a full hour – again, a good indication that the audience had a genuinely good time, and that’s what it’s all about in the end.
“A large number of people came and thanked me profusely at the end, and promised to attend subsequent events.
“Our next event is on April 14 at the West Hub: Professor Manuela Veloso, Herbert A Simon Professor Emerita at Carnegie Mellon University and Head of Artificial Intelligence Research at JP Morgan Chase, will deliver a talk on robotics.”