The event kicked off with three speakers: Mike Stevens, founder and editor of Insight Platforms, on generative AI and the research cycle; Arca Blanca's Alasdair Ramage, on how market-leading companies are applying AI to the marketing cycle; and Dr Chandrima Ganguly, data and AI ethics at Lloyds Banking Group, on using generative AI responsibly.

Examples of AI in the research cycle

Mike gave us ten examples of how AI tools can play out across the research cycle. For the relative novices in the audience, it provided a valuable masterclass in what such tools can do. Yes, there was a touch of Stephen King about his presentation, with some applications calling for robot moderators, but it also provided reassurance that, for now at least, there is a clear need for a human in the loop.

Insights into organisations using AI

Alasdair gave us an insight into how organisations are already using AI, building their own data on top of Big Data. He encouraged us to experiment and play, promising that we would see the benefits in the form of productivity gains and time savings. He also advised that organisations with processes that let them move through cycles quickly are better placed, as they can learn and iterate faster.

Using AI responsibly

Finally, Dr Ganguly drew this part of the day to a close with their view on using generative AI responsibly. If you had been confused about how Large Language Models (LLMs) work, this session would have put you right. It demonstrated how LLMs are trained and built, revealed their inbuilt biases and, crucially, set out the steps we can take to de-bias their outputs. Dr Ganguly also offered practical tips on providing relevant social and cultural context, previous data and the specifics of the outputs we need.

Enter the hackathon

And then came the hackathon, where attendees were encouraged to explore a selection of tools and platforms (many thanks to our platform partners: Jack from CoLoop; Maria from Civicom; Paul Eric and Ray from Qualzy) that may change everything we do now and in the future. I am writing this on the day The Sunday Times publishes an article arguing that AI needs a safety switch, and it is interesting that, although researchers relished playing with the tools on offer on the day, with the benefit of distance a note of caution has crept in.

Notes of caution

Take a look at Lucy Hobbs' and Stephanie Holland's posts on LinkedIn. Lucy, for instance, talks about how AI platforms can give qual researchers a head start and free up time for "clever thinking", but she also warns that AI is only as good as the question you put in, and of the danger of flattening the insight. Stephanie, meanwhile, reminds us of what is great about qualitative research, and why harnessing AI tools should not limit the breadth of our perspective, as they can't (yet) spot outliers or what makes humans human.

Takeaways from the day

The favourite takeaway from the day seems to be "AI won't be taking your job any time soon, but someone who uses AI every day might". As a journalist and publisher, I've seen first-hand this week how AI can make things a lot easier, as long as you don't forget its limitations and the origins of its language.