Lots of good discussions at the O’Reilly AI Conference today; a few that stood out:
Ruchir Puri, IBM Watson
- I was VERY glad to hear someone from IBM discuss the virtue of domain expertise, something that I discuss often.
Jana Eggers, Nara Logics
- ‘Whose black box do you trust?’: a good discussion of the rational fears and concerns about AI.
- Accountability should be a higher priority (deep fakes, etc.).
- If we don’t know what is ‘true,’ how do we trust AI’s recommendations?
- Her talk echoed points raised in Puri’s talk, using examples from computational chemistry, biological extrapolation, and neural nets/deep learning. We need time for exploratory analysis, and also expert systems to guide algorithm development.
- The way our brain works is not reducible to math.
- Always a pleasure hearing Hilary’s thoughts
- You can’t build a technical product that serves people without deep technical understanding.
- Again, the importance of domain expertise was echoed (it goes beyond technology).
- Let’s make AI boring
- A shift from ‘Big Data’ to doing real science in data science (my paraphrase).
- What could you do with all of the weather data in the world? The Dark Sky example. Great to see weather data, and what we would/could/should do with it, included in the AI discussion.
Overall, good talks & discussions throughout the day, but my biggest concerns, as always when dealing with AI, DS, ML & related topics, are the assumptions that:
- we can predict with more precision across all verticals,
- we are good at (or getting better at) causal inference, and
- more/faster/more granular data = ‘better predictions.’
We need to be mindful that more precision = more risk. It is perhaps timely that Carlo Ratti of the MIT Senseable City Lab, giving a talk at a separate conference, quoted Popper and called for a slightly different approach to AI in the context of cities:
Models still have utility, but we should use them as guides to explore possibilities, always quantifying risk. AI and its suite of associated tools/methods should help us prepare for, shape, and better react to the future, rather than predict it. Robust methodologies are better suited to engineered systems that build in protection through redundancy (mimicking evolution) than to solutions dependent upon human behavior.