BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IEEE WIE - ECPv5.12.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:IEEE WIE
X-ORIGINAL-URL:https://wie.ieee.org
X-WR-CALDESC:Events for IEEE WIE
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20210101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=UTC:20211109T170000
DTEND;TZID=UTC:20211109T183000
DTSTAMP:20220123T032729Z
CREATED:20211028T094630Z
LAST-MODIFIED:20211109T110121Z
UID:13733-1636477200-1636482600@wie.ieee.org
SUMMARY:Algebra Review: How does one best think about all of these numbers
DESCRIPTION:— Prerequisites — You do not need to have attended the earlier talks. If you know zero math and zero machine learning\, then this talk is for you. Jeff will do his best to explain fairly hard mathematics to you. If you know a bunch of math and/or a bunch of machine learning\, then these talks are for you. Jeff tries to spin the ideas in new ways. — Longer Abstract — An input data item\, e.g. an image of a cat\, is just a large tuple of real values. As such\, it can be thought of as a point in some high-dimensional vector space. Whether the image is of a cat or a dog partitions this vector space into regions. Classifying your image amounts to knowing which region the corresponding point is in. The dot product of two vectors tells us: whether our data scaled by coefficients meets a threshold; how much two lists of properties correlate; the cosine of the angle between two directions; and which side of a hyperplane your point is on. A novice reading a machine learning paper might not realize that many of the symbols are not real numbers but matrices. Hence the product of two such symbols is a matrix multiplication. Computing the output of your current neural network on each of your training data items amounts to an alternation of such matrix multiplications and of some non-linear rounding of your numbers to be closer to 0-1 valued. Similarly\, back propagation computes the direction of steepest descent using a similar alternation\, except backwards. The matrix way of thinking about a neural network also helps us understand how a neural network effectively performs a sequence of linear and non-linear transformations\, changing the representation of our input until the representation is one for which the answer can be determined based on which side of a hyperplane your point is on. Though people say that it is “obvious”\, it was never clear to me which direction to head to get the steepest descent. 
Slides Covered: http://www.eecs.yorku.ca/~jeff/courses/machine-learning/Machine_Learning_Made_Easy.pptx – Linear Regression\, Linear Separator – Neural Networks – Abstract Representations – Matrix Multiplication – Example – Vectors – Back Propagation – Sigmoid Speaker(s): Prof. Jeff Edmonds\, Virtual: https://events.vtools.ieee.org/m/287446
URL:https://wie.ieee.org/tec/algebra-review-how-does-one-best-think-about-all-of-these-numbers/
LOCATION:Virtual: https://events.vtools.ieee.org/m/287446
CATEGORIES:WIE Affinity Group
END:VEVENT
END:VCALENDAR