
Data Skeptic

Kyle Polich

176 Followers · 392 Plays

About Us

The Data Skeptic Podcast features interviews and discussion of topics related to data science, statistics, machine learning, artificial intelligence and the like, all from the perspective of applying critical thinking and the scientific method to evaluate the veracity of claims and efficacy of approaches.

Latest Episodes

GANs Can Be Interpretable

Erik Härkönen joins us to discuss the paper GANSpace: Discovering Interpretable GAN Controls. During the interview, Kyle references this amazing interpretable GAN controls video and its accompanying codebase, found here. Erik also mentions the GANSpace Colab notebook, a quick way to try these ideas out for yourself.

26 MIN · 13 h ago
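For intuition, here is a minimal sketch of the paper's central trick as I understand it: sample many latent vectors, run PCA on the resulting intermediate latent codes, and treat the top principal directions as candidate image-edit controls. The `mapping` stub below is a hypothetical stand-in for a real pretrained GAN's mapping network (e.g., StyleGAN's z → w), so this is an illustration under those assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pretrained GAN's mapping network
# (e.g., StyleGAN's z -> w); a real model would be loaded here.
W = rng.normal(size=(512, 512)) / np.sqrt(512)
def mapping(z):
    return np.tanh(z @ W)

# 1. Sample many latents and map them to intermediate latent codes.
Z = rng.normal(size=(10_000, 512))
codes = mapping(Z)

# 2. PCA via SVD on the centered codes: the leading right-singular
#    vectors are the candidate "interpretable control" directions.
mean = codes.mean(axis=0)
_, _, Vt = np.linalg.svd(codes - mean, full_matrices=False)
directions = Vt[:20]                      # top-20 principal directions

# 3. Edit: nudge one latent code along a principal direction; with a
#    real GAN you would decode w_edited back into an image.
w = mapping(rng.normal(size=(1, 512)))
sigma = 3.0                               # edit strength
w_edited = w + sigma * directions[0]
print(w_edited.shape)                     # (1, 512)
```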

Sentiment Preserving Fake Reviews

David Ifeoluwa Adelani joins us to discuss Generating Sentiment-Preserving Fake Online Reviews Using Neural Language Models and Their Human- and Machine-based Detection.

28 MIN · 4 d ago

Interpretability Practitioners

Sungsoo Ray Hong joins us to discuss the paper Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs.

32 MIN · 2 w ago

Facial Recognition Auditing

Deb Raji joins us to discuss her recent publication Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing.

47 MIN · 3 w ago

Robust Fit to Nature

Uri Hasson joins us this week to discuss the paper Robust-fit to Nature: An Evolutionary Perspective on Biological (and Artificial) Neural Networks.

38 MIN · JUN 12

Black Boxes Are Not Required

Deep neural networks are undeniably effective. They rely on so many parameters that they are appropriately described as “black boxes”. While black boxes lack desirable properties like interpretability and explainability, in some cases their accuracy makes them incredibly useful. But does achieving “usefulness” require a black box? Can we be sure an equally valid but simpler solution does not exist? Cynthia Rudin helps us answer that question. We discuss her recent paper with co-author Joanna Radin, titled (spoiler warning)… Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From an Explainable AI Competition.

32 MIN · JUN 6
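The episode's central question can be made concrete with a toy experiment (my own sketch, not from Rudin and Radin's paper): fit a shallow, human-readable decision tree and a random-forest “black box” on the same data and compare their cross-validated accuracy. When the scores are close, the black box isn't buying you much.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data; any real tabular task works the same way.
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=5, random_state=0)

# Interpretable model: a shallow tree you can read as a handful of rules.
tree = DecisionTreeClassifier(max_depth=4, random_state=0)

# Black box: hundreds of trees, opaque in aggregate.
forest = RandomForestClassifier(n_estimators=300, random_state=0)

for name, model in [("shallow tree", tree), ("random forest", forest)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```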

Robustness to Unforeseen Adversarial Attacks

Daniel Kang joins us to discuss the paper Testing Robustness Against Unforeseen Adversaries.

21 MIN · MAY 30

Estimating the Size of Language Acquisition

Frank Mollica joins us to discuss the paper Humans store about 1.5 megabytes of information during language acquisition.

25 MIN · MAY 23

Interpretable AI in Healthcare

Jayaraman Thiagarajan joins us to discuss the recent paper Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models.

35 MIN · MAY 15

Understanding Neural Networks

What does it mean to understand a neural network? That’s the question posed in this arXiv paper. Kyle speaks with Tim Lillicrap about this and several other big questions.

34 MIN · MAY 9
