LFP063 – Special Episode! A Deep-Dive into AI, Machine Learning, Big Data & FS w/ Dr Tristan Fletcher, Research Director, Thought Machine

I am delighted to be joined today by Dr Tristan Fletcher, Research Director at Thought Machine, to talk about Artificial Intelligence and Machine Learning in Financial Services. His LinkedIn strapline is “I apply state of the art prediction methods from the Machine Learning (Artificial Intelligence) academic community to real world problems”.

This is a mega hot topic right now and also one all too often hyped beyond all belief and, frankly, credibility.

For some time I have been on the lookout for someone with real depth of knowledge in this area. When attending and also moderating a recent InSync event at the Tate Modern I was impressed by Tristan’s deep understanding and also groundedness – the basic angle (or to be more precise my takeaways on his views) being “all this stuff has been known to academics for a long time” and “many aspects of it are exaggerated and in many ways it has become more of a marketing tool”.

So the point of this show is to put it all back in proportion.

What is AI? I mean what is it really, when one isn’t reading about it in some journalistically overblown tech piece? What is machine learning? Are neural networks like the brain? How big is big data and does it contain all the answers? All these and more we will cover, and I hope and am sure that you will go away far better briefed than before.

Tristan spans academia, FS and entrepreneurialism. Academically he has studied at, or had fellowships with, Cambridge, Imperial, UCL and Sussex. FS-wise he has worked at senior levels in prestigious organisations in asset management and trading. Entrepreneurially he has worked with/founded start-ups in these domains.

Oh yes and apart from all this he has also applied machine learning to medicine (Imperial College London), supply chain management (Unilever) and even fine wine pricing.

So it’s another show where my teams of researchers have found me a guest who is a colossus bestriding his world. And as always it’s another great show for my analyst, who will have to hear me moaning more about my own puniness in the face of greatness.

Plenty discussed including:

– what is intelligence? Is it intelligent to drink too much prosecco?

– how can we define AI if we can’t clearly define human intelligence?

– various ways that AI might be defined and the difficulty of pinning it down

– AI’s history and its implicit definition of “intelligence”; these kinds of problems (maths/chess) turned out to be far easier than walking or talking

– AI techniques and tools have been around for a long time; it’s more that computing power has increased and become more widely available in recent years

– the long-term “winters and summers” of AI; the current hype cycle having been foreshadowed in the past – inflated claims being followed by disappointment and relative lack of interest

– neural networks’ evolution into deep learning

– intelligence versus wisdom

– dropping the label of “intelligence” and reframing as interesting new things to do with a computer

– AI versus Machine Learning and where the latter term came from

– Machine Learning has three broad categories – supervised learning, unsupervised learning and reinforcement learning; what these are and example uses
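
To make the three categories concrete, here is a minimal sketch in Python. The dataset and model choices (scikit-learn on the iris data) are purely illustrative and not from the episode:

```python
# Illustrative only: supervised vs unsupervised learning with scikit-learn.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Supervised learning: labelled examples (X, y), learn the mapping X -> y.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy on the training sample:", clf.score(X, y))

# Unsupervised learning: no labels at all, look for structure (here, clusters).
km = KMeans(n_clusters=3, n_init=10).fit(X)
print("unsupervised cluster labels:", km.labels_[:10])

# Reinforcement learning, the third category, has no fixed dataset:
# an agent acts in an environment and learns from reward signals over time.
```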

– “interpretability”, its importance and implications

– an example of a deep learning approach

– the importance of generalisation – “pattern spotting” (ie “learning”) is always based on a sample but what matters is how well that pattern applies to new, unseen data.

– the importance of Occam’s razor in this context; overfitting
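
A toy illustration of overfitting versus Occam’s razor, with invented numbers rather than anything from the episode: fit the same noisy data with a simple and a very flexible model, then compare errors on data neither has seen:

```python
# Illustrative only: Occam's razor and overfitting on a noisy sine curve.
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 3, 30))
y = np.sin(x) + rng.normal(0, 0.2, 30)   # the sample we learn from
x_new = np.sort(rng.uniform(0, 3, 30))   # new, unseen data
y_new = np.sin(x_new) + rng.normal(0, 0.2, 30)

for degree in (3, 15):                   # simple model vs very flexible model
    coeffs = np.polyfit(x, y, degree)
    sample_err = np.mean((np.polyval(coeffs, x) - y) ** 2)
    unseen_err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree:2d}: error on sample {sample_err:.3f}, "
          f"on unseen data {unseen_err:.3f}")

# The flexible model typically fits the sample better but generalises worse:
# "precisely wrong" rather than "roughly right".
```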

– “precisely wrong versus roughly right”

– credit analysis and machine learning; the difficulty posed by the rarity of credit events (which may take years to mature, unlike eg FX, which changes every minute); credit rating agencies are good at predicting good credits but bad at predicting bad ones for precisely this reason

– the importance of having the data in the first place

– social data in Alipay’s approach

– “consistency is the last refuge of the unimaginative” [Oscar Wilde]

– Big Data and its real roots in state supervision of us all

– the ease of taking lots of data and throwing it at lots of tools – and the folly of that…

– what does “big” mean in “big data”?! The error of assuming “n = all”. The more data folks have, the more emboldened they are in this regard. More data can simply amplify the problems in your model, not solve them.

– It’s The Data Stupid – hence Google, Amazon and Facebook’s lead in this area (plus they have recruited many of the world’s best Machine Learning/Data scientists)

– a comparison with spreadsheets, which were once a specialist skillset but are now used by everyone; a similar thing is happening in machine learning

– the vital importance of domain knowledge

– the fetish of Big Data meaning that often one is looking for needles in ever larger haystacks

– data mining and the impossibility of removing entirely your own biases

– not everything is data-driven (the future is often different from the past)

– machine learning is not a panacea but an augmentative process

– Google Photo Assistant’s automated video editing facility – an example of a video of InSync created “automatically” – editing looks pretty good eh? Now all they need to improve is their automated music selection 😀

– putting everything into context, machine learning is being applied to very important tasks – ambulances as an example of a real-world use case

– predicting fine wine prices; “trying to predict any market’s prices (above and beyond trend following) is very very hard”

– the reasons markets are fundamentally hard to predict

– some techniques – neural networks, spatio-temporal Gaussian processes; multi-task learning; dimensionality reduction techniques; genetic algorithms/programming
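
For a flavour of one of those techniques, here is Gaussian process regression in miniature via scikit-learn. This is a sketch of the general method on made-up data, not of Tristan’s actual spatio-temporal models:

```python
# Illustrative only: Gaussian process regression with scikit-learn.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(0, 5, 20).reshape(-1, 1)     # made-up inputs, eg time
y = np.sin(X).ravel() + rng.normal(0, 0.1, 20)

# The RBF kernel encodes "nearby inputs give similar outputs";
# the WhiteKernel term models observation noise.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)

X_test = np.linspace(0, 5, 5).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
for xi, m, s in zip(X_test.ravel(), mean, std):
    print(f"x={xi:.2f}: prediction {m:+.2f} +/- {s:.2f}")  # uncertainty for free
```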

– using machine learning to predict cardiac problems and longevity (using a dataset of 30,000 points in the heart over time)

– supply chain problems – eg a soup factory – genetic algorithms; an analogy with recipe making
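
And a genetic algorithm in miniature, with an invented “recipe” target and fitness function for illustration: keep a population of candidate recipes, score them, keep the best, breed and mutate, repeat:

```python
# Illustrative only: a tiny genetic algorithm "evolving" ingredient
# proportions towards an arbitrary target blend.
import random

random.seed(42)
TARGET = [0.5, 0.3, 0.2]                  # invented ideal proportions

def fitness(recipe):
    # Higher is better: negative squared distance from the target blend.
    return -sum((r - t) ** 2 for r, t in zip(recipe, TARGET))

def crossover(a, b):
    # Child takes each ingredient's proportion from one parent or the other.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(recipe):
    # Small random tweaks, clipped so proportions stay non-negative.
    return [max(0.0, r + random.gauss(0, 0.05)) for r in recipe]

population = [[random.random() for _ in range(3)] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]             # survival of the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print("best recipe found:", [round(r, 2) for r in max(population, key=fitness)])
```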

– capital markets and machine learning; compliance use cases; complaints handling; cost savings

– Thought Machine – history, products and direction; from B2C projects (eg consumer spending patterns/behaviour) to a now more B2B core banking system

– the nature of Thought Machine’s technology and lower cost thereof

– real-time Treasury reporting functions; Basel 3 reporting et al can be done in real time and forecast using machine learning – things that can take others months to do

And much much more 🙂

Share and enjoy!