{"id":1714,"date":"2016-05-19T15:27:54","date_gmt":"2016-05-19T14:27:54","guid":{"rendered":"http:\/\/blogit.itu.dk\/ethos\/?p=1714"},"modified":"2016-05-27T08:38:35","modified_gmt":"2016-05-27T07:38:35","slug":"publicethos-8-how-machine-learning-differentiatesdiscriminates-some-legal-and-philosophical-explorations","status":"publish","type":"post","link":"https:\/\/blogit.itu.dk\/ethoslab\/2016\/05\/19\/publicethos-8-how-machine-learning-differentiatesdiscriminates-some-legal-and-philosophical-explorations\/","title":{"rendered":"publicETHOS #8: Katja de Vries on &#8220;How machine learning  differentiates\/discriminates &#8211; some legal and philosophical  explorations&#8221;"},"content":{"rendered":"<p><em>What do machine learning algorithms see when they look at us? Some concerns\u00a0about the transparency and discriminatory effects of profiling based on\u00a0machine learning.<\/em><\/p>\n<p>In this presentation I will show how machine learning, when it is used to\u00a0make sense of human behavior and characteristics (\u2018profiling\u2019), can lead to\u00a0infringements in terms of privacy, data protection and antidiscrimination\u00a0law. One major concern from the perspective from data protection law is the\u00a0question how to create useful transparency about the functioning of machine\u00a0learning algorithms. I illustrate some of the issues related to transparency\u00a0with recent work I have done in the USEMP project\u00a0(<a href=\"http:\/\/www.usemp-project.eu\/\" target=\"_blank\">http:\/\/www.usemp-project.eu\/<\/a>). Another important concern is how to\u00a0distinguish which machine learning categorizations should be considered\u00a0\u2018good\u2019 and legitimate differentiations, and which \u2018bad\u2019 discriminations (in\u00a0the sense that they are either illegitimate, or at least undesirable from an\u00a0ethical perspective). 
Looking at current privacy and antidiscrimination law,\u00a0I argue that the existing legal framework might need to be extended. In\u00a0discussing the conundrums of transparency and differentiation\/discrimination\u00a0in relation to machine learning algorithms, I\u2019ll pay some specific attention\u00a0to the implications of the new General Data Protection Regulation.<\/p>\n<p><em>The slides from the presentation can be downloaded here:\u00a0<a href=\"https:\/\/blogit.itu.dk\/ethos\/wp-content\/uploads\/sites\/14\/2016\/05\/Gradual-equality-itu-26-MAY-2016_v1.3.pdf\" rel=\"\">Gradual equality &#8211; itu 26 MAY 2016_v1.3<\/a><\/em><\/p>\n<p><em style=\"line-height: 1.5\">Signing up for the talk is not required.\u00a0<\/em><\/p>\n<p><strong>Speaker:<\/strong><\/p>\n<p>Katja de Vries is a legal researcher and philosopher of technology\u00a0affiliated with the Institute for Computing and Information Sciences (iCIS) at\u00a0Radboud Universiteit Nijmegen (the Netherlands) and the Centre for Law,\u00a0Science, Technology, and Society (LSTS, Vrije Universiteit Brussel,\u00a0Belgium). Currently she is working on the USEMP\u00a0(<a href=\"http:\/\/www.usemp-project.eu\/\" target=\"_blank\">http:\/\/www.usemp-project.eu\/<\/a>) project, which will result in a transparency\u00a0tool that shows users of social networks which (commercially interesting)\u00a0information can be derived from their data (http:\/\/databait.eu). In a few\u00a0months Katja de Vries will defend her PhD thesis (\u2018Machine\u00a0learning\/Informational fundamental rights. 
Reconciling two Baroque practices\u00a0of making sameness with a\u00a0governmentality of proportionality\u2019). Her PhD\u00a0research looks at how machine learning, when it is used to make sense of\u00a0human behavior and characteristics, can lead to infringements in terms of\u00a0privacy, data protection and antidiscrimination law. De Vries has been a\u00a0member of the European \u201cLiving in Surveillance Societies\u201d network, and has\u00a0worked on the FIDIS (Future of Identity in the Information Society) and SIAM\u00a0(Security Impact Assessment Measure &#8211; a decision support system for security\u00a0technology investments) projects. She publishes on a wide range of legal and\u00a0philosophical topics and has co-edited \u2018Privacy, Due Process and the\u00a0Computational Turn\u2019 (Routledge, 2013). De Vries studied at Sciences Po in\u00a0Paris, obtained three master\u2019s degrees with distinction at Leiden University\u00a0(Civil Law, Cognitive Psychology and Philosophy) and graduated from Oxford\u00a0University (Magister Juris).<\/p>\n<p><strong>Details:\u00a0<\/strong><br \/>\n<strong>Time:\u00a0<\/strong>May\u00a026, 2016, 12:00-14:00<br \/>\n<strong>Place:\u00a0<\/strong>Auditorium 3<br \/>\nJoin the Facebook event\u00a0<a href=\"https:\/\/www.facebook.com\/events\/498800013578169\/\" target=\"_blank\">here<\/a>.<br \/>\nThe event will be in English.<\/p>\n<p><strong>ITU address:<\/strong><br \/>\nIT-University of Copenhagen<br \/>\nRued Langgaardsvej 7<br \/>\nDK-2300 Copenhagen S<\/p>\n","protected":false},"excerpt":{"rendered":"<p>What do machine learning algorithms see when they look at us? Some concerns\u00a0about the transparency and discriminatory effects of profiling based on\u00a0machine learning. 
In this presentation I will show how [&hellip;]<\/p>\n","protected":false},"author":43,"featured_media":1717,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","ngg_post_thumbnail":0,"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[1,18],"tags":[21,98,88,97,99,45,48,44],"class_list":["post-1714","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news","category-teaching","tag-event","tag-katja-de-vries","tag-law","tag-machine-learning","tag-philosophy","tag-publicethos","tag-sts","tag-teaching"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/blogit.itu.dk\/ethoslab\/wp-content\/uploads\/sites\/92\/2016\/05\/robot-507811_1920-02-01.jpg?fit=5333%2C4500&ssl=1","jetpack_sharing_enabled":true,"jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/posts\/1714","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/users\/43"}],"replies":[{"embeddable":true,"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/comments?post=1714"}],"version-history":[{"count":9,"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/posts\/171
4\/revisions"}],"predecessor-version":[{"id":1751,"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/posts\/1714\/revisions\/1751"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/media\/1717"}],"wp:attachment":[{"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/media?parent=1714"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/categories?post=1714"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogit.itu.dk\/ethoslab\/wp-json\/wp\/v2\/tags?post=1714"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}