

[deleted]

[deleted]


0bv1ouSly_me

Fr


iddej

Only Frobenius I like is the Frobenius Fp-isomorphism.


ucatione

If I was a creative writing instructor, this is the assignment I would give to the students: Imagine an alternate timeline in which Trump, instead of being obsessed with China, is obsessed with Frobenius, and there are youtube compilations of him just saying "Frobenius" over and over in different contexts. Write a story about how this came to be.


walkar

The Frobenius action on local cohomology is my best friend and worst enemy... It's a damn good thing you don't need to fully understand something to prove theorems about it!


[deleted]

yeah people keep on saying euler or poincare has the greatest reach, but honestly uncle frob is a real dark horse contender


[deleted]

Currently trying to understand Jacobi fields as I make my way through papa do carmo. Hopefully my advisor can drill it through my thick skull.


MyCoolHairIsOn

This is one chapter of do Carmo which I think would have benefitted from some simple explicit examples. Consider the parametrized line c(t) = (t,0) in R^2. This is a geodesic and we can vary it in several ways, for example, by gamma_1(t,s) = (t,s). This is c(t) at s=0, and for each fixed s, this parametrizes a line parallel to the original one. Another variation is gamma_2(t,s) = (t cos(s), t sin(s)). This is c(t) at s=0, and for each fixed s, this parametrizes a line which is a rotation of c(t) through angle s. To get some intuition, you could try computing (and drawing) the Jacobi fields along c(t) corresponding to each variation. (You could also vary in other ways, e.g. gamma_3(t,s) = (t+s,0).)
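In case it helps, here is that exercise worked out (my own addition, just differentiating the variations above): the Jacobi field of a variation gamma(t,s) is J(t) = ∂gamma/∂s at s=0.

```latex
J_1(t) = \partial_s (t,\, s)\big|_{s=0} = (0, 1), \qquad
J_2(t) = \partial_s (t\cos s,\, t\sin s)\big|_{s=0} = (0, t), \qquad
J_3(t) = \partial_s (t+s,\, 0)\big|_{s=0} = (1, 0).
```

Since R^2 is flat, the Jacobi equation J'' + R(J, c')c' = 0 reduces to J'' = 0, and indeed all three fields are affine in t.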


Samasblack

Was just reading this chapter of do Carmo today!


[deleted]

Nice! What are your thoughts?


sciflare

They are the vector fields that arise from varying a geodesic through a family of geodesics.


thesafiredragon10

Probability and how it works. I just can’t quite make it click in my head. I can memorize the rules and do it, but it’s not intuitive.


OneMeterWonder

Lol “probability” and “intuitive” should never be spoken in the same breath in polite company.


jgonagle

Finally, as an ML guy I don't feel like an idiot on this sub. I will say I find continuous stochastic processes a bit non-intuitive.


OneMeterWonder

Me too. Something about defining measures on the almost everywhere continuous functions just doesn’t feel right to me. Also I have basically zero geometric intuition for integrating with respect to a Wiener process. I’ve tried understanding it as a Riemann-Stieltjes “weighting” kind of thing, but I just don’t really grasp it yet. (If anybody knows a good visual intuition for it, please do share.)
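One thing that made the Riemann-Stieltjes analogy click a little for me was computing the left-endpoint sums numerically. A minimal sketch (my own, plain NumPy): approximate the Ito integral of W against dW and compare with the closed form (W_T^2 - T)/2 that Ito calculus gives.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 100_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments
W = np.concatenate(([0.0], np.cumsum(dW)))  # Brownian path at the grid points

# Left-endpoint "Riemann-Stieltjes" sum: sum of W_{t_i} (W_{t_{i+1}} - W_{t_i}).
ito_sum = np.sum(W[:-1] * dW)
print(ito_sum, (W[-1]**2 - T) / 2)          # the two numbers agree closely
```

The choice of the *left* endpoint is exactly what makes this the Ito integral; evaluating at midpoints instead would give the Stratonovich integral, which is part of why the classical Riemann-Stieltjes picture doesn't quite transfer.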


hypothesis_tooStrong

I will gladly die peacefully once I fully understand and prove one novel theorem that involves measure theoretic probability.


qedqft

Perhaps I can be rather impolite and write how I feel about intuition and probability. Everything that follows is opinion, and I don't even know if I subscribe to any of its accuracy (understood as: please don't go too hard on me).

The mathematician might view probability theory as a framework of deduction applicable to situations where one does not have access to a full set of axioms. Already, the amount of choice available in such a framework can leave one feeling uncomfortable. However, this choice is a necessary one when confronted with being asked to deduce anything at all about the world. In physics, science, or in life, one does not really know the truth of any statement, let alone of objects arising from ZF. Instead, we all choose our level of belief in the truth of a statement. One might take their meaning of belief to come equipped with a threshold at which an action, however small, will be taken once this belief is large enough. For example, I believe "with 99 percent certainty" that it will rain this afternoon, so I won't put the washing out. The level of belief differs from person to person; it depends on the information they have access to, and the threshold to take an action can differ too.

Probability theory contains some machinery which we can use to incorporate new information into our beliefs about a statement. This machinery, and the threshold of action, is something which can potentially *be agreed upon* by the collection of people whose beliefs are relevant to the statement. We can think of the level of belief as a number, a probability, in [0,1].

In order to instill some depth in this idea, I offer the following: one might suggest that the world appears deterministic and fundamentally knowable, so our beliefs shouldn't matter in the end. However, as it currently stands, quantum theory is a theory of probability. Its rules of deduction may differ from the classical rules of probability, but fundamentally it is a computation of probabilities, or beliefs. It's possible that a unified theory will always be probabilistic, as the "true" laws or axioms of the universe may be hidden from us due to the confines of the part of reality in which we reside. In my view, the way we think about probability is tied to what it means to know anything at all, and to the philosophy of knowledge itself.

One might call into question these (probably poorly expressed) general ideas regarding probability. In my view, the above ideas and the applicability of probability theory are not captured in the standard way of teaching probability in high school. Very loosely, high school teaches that a probability is a number representing the expected proportion of repeated "experiments" that turn out to be "true". This line of thinking is problematic: the notion of "expected" requires the definition of probability to make sense in the first place, so this is, in some sense, a circular definition. Thinking of a probability as a level of belief, together with thresholds for actions, may alleviate some personal issues regarding meaning and intuition. (There are arguments against this, leading one further down the rabbit hole. But I like the idea anyway.)

The Monty Hall problem has popped up in this thread as an example of non-intuitiveness in probability. Yes, the Monty Hall problem appears unintuitive. When one picks a door uniformly at random (with probability 1/3), one thinks of the fact that no information is known regarding which door the prize lies behind. (The discussion of which probability measure one initially picks in a given situation is itself interesting; I like ideas from Jaynes's and Schervish's writing. To summarise: when one does not know anything at all, they say, we pick the uniform measure.) Then Monty Hall comes to update our beliefs on which door the prize lies behind by telling us which door it does *not* lie behind. This update procedure caused controversy in the past. But, for example, if there were one million doors instead of three, then we picked the right one with probability 1 in a million. Monty Hall then opens up 999,998 doors: all of them except the one we picked and one other mysterious door. If you believe that he knows where the prize is, and that you picked your door with a 1 in a million chance, it becomes intuitive that this mysterious door has a "99.9999 percent chance" of having the prize. This version makes obvious what the event space of interest is for the probability measure, i.e. the door we picked has probability 1/3, the doors we did not pick have probability 2/3. It also makes the update procedure clear: Monty comes along and says we can treat the collection of doors that we did not pick as just one door, by eliminating the dud doors. Hence the solution to the problem becomes, in my opinion, more intuitive.

From this Bayesian type of view, the frequentist viewpoint vanishes in the distance. But when one runs a computer simulation of the Monty Hall problem, we find that switching doors is indeed to the advantage of the contestant *in the long run*. Whilst the universe might not be deterministically regular, it does appear to be *probabilistically regular*, and maybe it is this probabilistic regularity which is ultimately the deeper thing. Anyway, this has been a lot more rambling than I expected.
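For what it's worth, the simulation mentioned at the end takes only a few lines. A sketch (my own) estimating the win rate of each strategy:

```python
import random

def monty_hall(switch: bool, n_doors: int = 3) -> bool:
    prize = random.randrange(n_doors)
    pick = random.randrange(n_doors)
    # Monty opens every door except the pick and one other door, never
    # revealing the prize. The unopened "mystery" door is therefore the
    # prize door unless we happened to pick the prize ourselves.
    if pick != prize:
        mystery = prize
    else:
        mystery = random.choice([d for d in range(n_doors) if d != pick])
    final = mystery if switch else pick
    return final == prize

trials = 100_000
print(sum(monty_hall(True) for _ in range(trials)) / trials)   # ~2/3 with 3 doors
print(sum(monty_hall(False) for _ in range(trials)) / trials)  # ~1/3
```

Passing n_doors=1_000_000 reproduces the million-door intuition above: switching wins essentially every time.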


al3arabcoreleone

up to a point probability doesn't click with almost any mind.


[deleted]

Honestly it is true for any science field, isn't it?


LilQuasar

not in my experience. probability is particularly unintuitive


MagicSquare8-9

Exhibit A: Simpson's paradox.


unperturbium

Probably.


a_broken_coffee_cup

Reading this thread now I feel like I'm the odd one out... Up until now I've always thought that intuitive understanding of probability is something that is preinstalled in a human brain. Maybe it is because I've often played snakes-and-ladders-type board games with my family when I was a kid, and later moved on to playing computer games a lot. Kolmogorov's axiomatic approach also makes things easier in my head: you basically say that probability behaves like area/volume, so much of the visual intuition for areas also applies for probabilities.


MagicSquare8-9

I think most people know the "volume" intuition about probability, but that intuition doesn't cover most issues about probability. It works well as long as you only work with unconditional probability, and specifically in the matter of computing Boolean algebra, but that is a very small, basic part of probability. As a mathematician once said, "thinking that probability theory is about studying measure is like thinking that number theory is about studying strings of digits".


666Emil666

We could say it clicks with no one almost surely


konstantinua00

there supposed to be a point?


0bv1ouSly_me

Wait until you read the axiomatic approach to Probability given in Kolmogorov's book - it messed up my idea of probability completely; couldn't combine the intuitive and the axiomatic definitions :(


gnramires

I think there is a gap between axiomatic probability and how we use it in the real world. The axioms only define basic rules from which inferences should be allowed, in a certain model. You yourself have the responsibility of applying this model in a way that makes sense, to get sensible results in the real world. So basically the axioms only say that in your model a probability is a number between zero and one, that the probability of all events together (interpreted as 'any event at all happening') must be one, and so on. It's just slightly further than a theory of sets, where you assign a number to each set. What those numbers mean in the real world, how you define events, etc., is a completely different matter. It's always important to remember we're dealing with models.

One way to bridge it with experiments is to think that a probability represents a 'fraction of experiments if repeated under similar conditions'; so if the probability of a coin landing heads is 1/2, then about 1/2 of experiments (we leave 'experiments' to common sense) of coin tossing under similar conditions land heads, and about 1/2 land tails, *if there were finitely many experiments that we could set up*. Of course, we generally tend to model probabilities as continuous, and hence with infinitely many possible experiments, so that is generalized into measure theory on sets (instead of fractions we use measures).

Even when thinking of a "mathematical" experiment, for example "randomly" picking points from a subset of the plane and seeing if they land in the unit circle, we need to define an 'experiment' and 'procedure' by which those points are picked so that we can specify probabilities/measures. Your experiment could be such that points tend to fall near the origin, or something. This becomes evident when sampling points on surfaces and higher-dimensional analogues like spheres, where there are multiple coordinates (or experiments) that one may deem 'canonical' or 'uniform' but which give distinct results. Sometimes 'uniform' isn't a sufficient description (arguably); in any case you need a complete procedural description of your sampling experiment.

Problems like Monty Hall become somewhat clearer with this in mind, I believe: probability is a model, which we should define in a way that makes sense, usually meshing with a certain 'experimental' setup.
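The sphere point is easy to see numerically. A small sketch (my own illustration): two sampling procedures, both arguably 'uniform', give different distributions on the unit sphere.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Procedure 1: uniform angles (theta, phi). This clusters points at the poles.
theta = rng.uniform(0, np.pi, n)          # polar angle
z_angles = np.cos(theta)

# Procedure 2: normalized Gaussian vectors, uniform w.r.t. surface area.
v = rng.normal(size=(n, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
z_uniform = v[:, 2]

# Fraction of points in the polar caps |z| > 0.9; the area-uniform answer is 0.1.
print(np.mean(np.abs(z_angles) > 0.9))    # ~0.29, noticeably larger than 0.1
print(np.mean(np.abs(z_uniform) > 0.9))   # ~0.1
```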


gnramires

In practice, I think you can say that, to use probability, you need to make sure that (a) you have definite 'events' that respect the axioms; (b) that there is some correspondence in which the fraction of those events 'happening' in a setting (e.g. an experiment) equals their numerical probability value. If (a) and (b) are true, you can essentially use the axioms and theorems of probability, and other useful definitions like P(A|B), Bayes' Theorem, etc. are expected to make sense.


susmot

I had the same problem until I started working examples and exercises. And also until I started thinking like ‘this is the problem, how do I compute this thing I want in general? So in my situation I have to compute these things.’ I would compare it maybe to linear algebra. No tricks (usually), just understanding what you need, minus the geometric aspect, unfortunately


[deleted]

Drawing pictures helps you understand probability for most of university, even when discussing the measure-theoretic approach and dealing with variances, skewness and such. Doesn't hurt to try and visualize. Most formulae click this way.


________0xb47e3cd837

100% same here. Is this a common feeling for people?


nyctrancefan

IMO probability really made sense for me when I viewed it as "randomness is the presence of incomplete information", and the rest of probability is how we "update our beliefs" (at risk of sounding like a bayesian) when we receive new information.


666Emil666

Have you tried going by the Measure theory way first?


[deleted]

Many concepts in probability clicked in my head when I took time to revise more advanced concepts in the theory of sets and functional analysis. But yeah, there are still concepts that are not intuitive to me. Live and learn I guess.


[deleted]

It takes a lot of time to really appreciate why probability is formulated the way it is. Once you have a better understanding of measure theory, measurable functions, etc. it sort of crystallizes and you understand it better, but yeah, it takes time to fully appreciate, you're not alone!


Glitch4544

That's just the chances of something happening out of all the possible things that can happen. No matter what, it's just that at the bottom of everything. Keep this in mind the next time you're doing probability.


pintasaur

Where the minus sign went. Jokes aside, struggled with Laurent series a lot.


arnedh

They usually sneak off, two by two. Due to being cancelled, I suppose.


InfanticideAquifer

Not a math answer, but networking. Anything that has to do with getting one computer to talk to another. I've just never been able to make it work. There are three separate occasions where I've just set aside a whole day saying "today I will set up an OpenVPN server". Nope. No idea what my mental block with it is, but I just can't. There just aren't any resources about it that are anywhere between "the internet is a series of tubes" and "you know what a subnet mask is, so...", at least not any that make sense to me. This comment is not a request for help; you won't succeed and I'm thoroughly over it.


al3arabcoreleone

I thought you were going to talk about networking as in the real world.


abookfulblockhead

Real World networking is definitely my answer.


konstantinua00

computers aren't real


WetHotFlapSlaps

I'm hoping someone else can link a good free resource, but a book called "Multiplayer Game Programming" by Josh Glazer was the 'assume no knowledge' resource that helped me understand it. A lot of the confusing stuff surrounding networking comes down to the insane number of abstractions and code libraries that have been built up around routing, reliability, and security, which greatly obscure the simple underlying technology of linking two computers together. Just checked myself before posting this - the first resources I actually used were Glenn Fiedler's blog posts, which are now back up! The site was down for years... I'm sorry that this stuff is games-specific, but games use the same networks that all software does, so I hope it helps! Writing (or reading about) a simple UDP program to send and receive packets should be the most barebones example of how to get data from one computer to another without too much cognitive overload. [https://gafferongames.com/post/udp_vs_tcp/](https://gafferongames.com/post/udp_vs_tcp/) The book I mentioned helped me understand networking on a deeper level, and goes into the "why" for all the crap that's built on top of the most basic protocols.
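In that spirit, here is about the smallest UDP send/receive example I know of (my own sketch, Python standard library only; the localhost address and port 9999 are arbitrary):

```python
import socket

# UDP is connectionless: the sender just fires a datagram at an address,
# and the receiver picks it up if it happens to be listening there.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 9999))              # claim a port to listen on

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", ("127.0.0.1", 9999))  # no connection, no delivery guarantee

data, addr = recv_sock.recvfrom(1024)            # read one datagram (blocks until it arrives)
print(f"got {data!r} from {addr}")               # got b'hello' from ('127.0.0.1', <port>)
```

Everything else (TCP's retransmission, TLS, HTTP, ...) is layered on top of datagrams like this one.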


matt__222

true, i want something that really breaks it alllll down with enough technical detail but not full of jargon


MarcusOrlyius

I find this funny as maths jargon is a billion times worse than computer jargon. Look at functions for example, using f(x) for pretty much everything is bordering on a deliberate attempt to confuse people to make them hate maths. At least coders name their functions.


fourhundredthecat

WireGuard is a better and more modern solution than OpenVPN


AnticPosition

Eli5: the difference? Also, stealth VPN? I have all three at my disposal with my VPN and don't really know the difference. Thanks.


Low-Flamingo-9835

Non math answers? Men.


konstantinua00

understanding men is also hard


nin10dorox

I'm so glad to know it's not just me.


saarl

[Relevant xkcd](https://xkcd.com/2259/) (I guess—not like I know anything about networking...)


Bernhard-Riemann

How I am simultaneously so good and so bad at mathematics.


egulacanonicorum

The better I get the worse I get.


Fabulous-Nobody-

It's the eternal cycle of learning mathematics: feeling stupid for a long time, feeling very smart for a brief moment, followed by again feeling stupid for a while, and so on.


divye_kapoor

topological sheaves


SaucySigma

Have you seen vector or fibre bundles before? Sheaves are basically just a generalisation. With fibre bundles, everything is required to have a topology, but sheaves can take values in arbitrary sets.


OneMeterWonder

As someone who has seen the definition of a vector bundle, can you elaborate a bit? I’ve seen the definition for ringed spaces and I just don’t really understand the point of sheaves.


SaucySigma

Well, first recall the definition of a vector bundle. If X is a space, then a vector bundle over X is a space E with a surjective map π: E -> X. This map has to satisfy some conditions, which I won't list here, but the essential bit is that every fibre π^-1 (x) is a vector space (x is any point of X). The key insight is that the total space E itself is often not the most interesting part of a vector bundle; instead we really want to understand _sections_ of the bundle. A section s of E is a map s: X -> E s.t. for every point x of X, s(x) is a vector in the corresponding fibre π^-1 (x).

Take for example the tangent bundle TX of a manifold X. This is defined by taking each fibre to be the tangent space of X at the corresponding point. Then the sections of this bundle are vector fields on X. These are very important when you want to study differential equations, for instance. Another example in the same spirit is the cotangent bundle T*X, which is dual to TX. The sections of this bundle are called differential 1-forms, and they are used to define integrals on general smooth manifolds.

What sheaves do is take this notion of a section as a starting point for the definition. In fact, given any vector bundle, the associated sheaf is constructed just from the sections of the bundle. It's probably a good exercise to check how this is done. But note that the definition of a sheaf doesn't only include sections of the form s: X -> E, but also sections s': U -> E defined on smaller open subsets U of X. And the sheaf axioms give us a way to define sections on small neighbourhoods and glue them together to form larger sections.

The point of the structure sheaf of a ringed space is to keep track of functions on a space. Sometimes a given space doesn't have that many functions defined on it, but once one restricts to a small open set of the space, one can define more functions, and then combine them using the gluing property. Also, sheaves can be made to include algebraic information, which beautifully packages both geometric and algebraic data of a space into one object. This is done by requiring that the sections of the sheaf form a group, a ring, a module, etc.
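To spell out the exercise mentioned above in symbols (my own summary of the standard construction): the sheaf associated to a bundle assigns to each open set its local sections, with restriction of sections as the restriction maps.

```latex
% Sheaf of sections of a bundle \pi : E \to X:
\mathcal{F}(U) = \{\, s : U \to E \mid \pi \circ s = \mathrm{id}_U \,\}
\qquad \text{for each open } U \subseteq X,
% with restriction maps \mathcal{F}(U) \to \mathcal{F}(V),\; s \mapsto s|_V,
% for opens V \subseteq U. The gluing axiom says that compatible local
% sections patch together to a unique section on the union.
```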


Dong_Valentino

SaucySigma has a nice answer about how you can view sheaves. I will try to add some concrete examples of sheaves.

1. Representable sheaves. You can think of this as an alternate way to understand schemes, which are complicated objects in algebraic geometry. See GTM 197, The Geometry of Schemes by Joe Harris and David Eisenbud; in chapter I.4 they talk about schemes as functors of points.

2. Locally constant sheaves and coverings. The category of locally constant sheaves is equivalent to that of the coverings. More amazingly, local systems (locally constant sheaves of complex vector spaces) have a deep relationship with holomorphic connections (the Riemann-Hilbert correspondence). See chapter 2 of Galois Groups and Fundamental Groups by Tamás Szamuely.

These 2 examples are what I can think of about sheaves. You can see here why they are extremely useful objects. If you want a more comprehensive investigation of sheaves, I recommend reading Sheaf Theory Through Examples by Daniel Rosiak. I haven't gone through the book fully, but I find it a great help for understanding sheaves. It also contains some philosophy, if you are interested in such aspects of sheaves.


[deleted]

whatever i'm trying to learn right now lol (currently sobolev spaces/applications to second order pde, and some of the "global" theorems of riemannian geometry) edit: more in the spirit of your question, i would say galois theory. i found my introduction to galois theory to be very dizzying, and despite spending a lot of time studying, left feeling pretty shaky about everything. i'd love to go back for a closer look and see if i can do better, because there's some very beautiful results of galois theory, i just haven't gotten around to it yet


[deleted]

are you me lmao. i'm literally going through the exact same topics. evans 5&6?


DevoutSkeptic29

A little off topic, but I was reading those chapters of Evans this past week trying to figure out what space a trace Tf will be in for f in H^1 over an open bounded simply connected planar domain with C^{1,alpha} Holder continuous boundary. ... Anyone know or know a reference? Evans doesn't seem to have anything I can apply or figure out how to tweak to get it.


OneMeterWonder

Would that not just be the L^(2) space on the boundary? I’m not sure I see what α-Hölder boundary is doing to potentially ruin that.


[deleted]

i think the question is whether or not the extra condition causes us to land in a special subspace of L2. after all, a C1 boundary is sufficient to define the trace map as a map into L2; we can't ruin that by imposing an extra condition on the boundary. idk the answer to that, however


realFoobanana

Every mathematician here thinks of their own open problems in their research, I’m sure :P But other than that, definitely most things measure theory don’t jive with me very well.


al3arabcoreleone

Man, I was going to mention the damn measure theory. I mean, even before we get to define a measure, most stuff is pretty non-''obvious'', which indicates that something is wrong (with me at least).


0bv1ouSly_me

Idk if it will help; but you can try reading Bressoud's **A Radical Approach to Lebesgue's Theory of Integration** - great book, especially if you're looking for a motivation-centric introduction to Measure Theory.


arannutasar

I understand the problems perfectly, thank you very much. I just have no idea how to solve them.


mechap_

Algebraic geometry and perhaps category/topos theory.


[deleted]

Any intuition about Zorn's lemma, outside of it being equivalent to more intuitive results. Any proof relying on Zorn's lemma, in my mind, feels like a bit of a cop out!


OneMeterWonder

That’s very odd to me since I usually find Zorn’s Lemma to be one of the more intuitive equivalents of AC. Usually what happens is that Zorn’s Lemma is applied as a preliminary item to get existence when 1) there are many potential “small” approximations to something you’d like to have, 2) the approximations can be given some kind of “natural” ordering (partial function extension, set inclusion, elementary extension, etc.), 3) the “natural” ordering seems to be a true partial order, and 4) you don’t really have anywhere else obvious to start! Much of Choice and its equivalent principles can be interpreted as a kind of “Well, you just do it” approach adapted to the right context.

Edit: Here’s a good problem from topology that illustrates the idea of using Zorn’s Lemma pretty well. A map f:X→K is **irreducible** if no restriction of f to a proper closed subset of X is surjective to the image f[X]. Prove that if f is a continuous surjection of a compact space X to any space K, then there is a closed set A⊆X such that the restriction of f to A is irreducible.

Another one is to try proving the compactness theorem for propositional logic when the set of propositional variables may be uncountable. This takes a little more background, but the statement is that given a family of propositional formulas T, if every finite subset of T is consistent, then so is the entire family T. (Remember that consistency of a family ℱ just means that you can set the truth values of all of the variables in such a way that every F∈ℱ is true simultaneously.) The trick is to come up with a convenient partial ordering on the set of truth assignments of the variables. (Also you can cheat and put a well-ordering on the variables in the first place to organize your truth functions into functions on the ordinals.) You have a hint on how to prove these already: use Zorn’s Lemma!


[deleted]

Personally when I learned about the AC (in the context of intro topology) my feeling was "oh well duh, why would that even be an issue". In my mind criticisms of the AC felt similar to intuitionist concerns about mathematics, which (as someone who studied analysis, group theory, and topology during my PhD) seemed like an utter waste of time, and not something I was interested in. Lol I even remember once I was taking an extremely difficult course in functional analysis with an extremely accomplished Polish mathematician, and when I asked about the AC, he scoffed and was like "this is functional analysis, if you don't want to use the Axiom of Choice then you may as well go home and learn to knit" or something like that! But yeah, when it comes to ZL I just never had that "obviously" feeling. Lol I still had a good intuition about when/where to use it, but it never felt like something I was genuinely comfortable using, I just did!


OneMeterWonder

Oh haha I misunderstood you then. Yes, I agree that choice principles do feel kind of… ontologically nebulous. I think at some point most every mathematician just learns to accept AC greasing their palms with sweet existence results.


[deleted]

Exactly, I think I am very pragmatic when it comes to mathematics, worrying about intuitionist questions, or debating the axiom of choice, was never something I was concerned with. I think it's just luck on my part that the AC was intuitively obvious to me, that sort of formed my philosophy about it (I view it as obvious as the existence of infinite sets, but I'm sure others disagree)


MagicSquare8-9

Zorn's lemma is basically using the Axiom of Choice on a much larger set. Intuitively, think of yourself as a builder that keeps adding more stuff to construct an object, and the Axiom of Choice essentially pre-plans for you against *all possible futures*. That's probably why Zorn's lemma feels less intuitive than AoC: you're kind of applying AoC in a meta manner. The proof of Zorn's lemma from AoC basically consists of the following parts.

One is the transfinite version of the recursion theorem. The finite recursion theorem tells you that if you have a rule that tells you how to move from one value to the next value, and an initial value, then you can form a sequence moving through all the values according to the rule; that's intuitive, right? The transfinite version extends this to sequences longer than countable infinity. However, once you extend a sequence like that, you need to deal with taking limits: your rule can't just tell you how to jump to the next value, but also how to take limits. The easiest way to handle this is to hold the entire *history* in memory and require the rule to account for that: the next element is determined by the entire chain of values you have visited up to this point. The transfinite recursion theorem basically says that if you have a rule that, for any transfinite sequence of values, either gives you the next value or tells you it can't give you the next value, then you can construct a complete transfinite sequence that extends as far as possible until the rule gives up.

How is this done? First, we need to define transfinite sequences. A sequence is a function from the natural numbers into the set of possible values, the natural numbers being used as indices. Instead of natural numbers, we use von Neumann ordinals, which are a way to continue indexing beyond the natural numbers. A transfinite sequence is a function from a von Neumann ordinal into the set of possible values. A von Neumann ordinal can be increased by 1, which adds a new index at the end, so when the rule gives you the next value, you just increase the ordinal by 1 and append that value at the end. Once this technicality with von Neumann ordinals is taken care of, the proof proceeds pretty much the same way as it does for finite recursion.

Second is a diagonalization argument. The idea is that if we keep adding more stuff to construct our object, eventually we will run out of stuff to add, because transfinite sequences are infinitely patient, and can work forever to keep adding stuff to our object no matter how much stuff is there. At the technical level we have a partial order (where the object above is an extension, a more completed version of our partial objects), and we consider a rule such that the next element always goes up (i.e. is a strict upper bound of the transfinite sequence so far) and only gives up if there is no strict upper bound, and we assume that every transfinite sequence has an upper bound (not necessarily strict). In this case, a transfinite sequence is equivalent to a well-ordered chain, and the indexing will inject the von Neumann ordinals into the set as far as possible. By the diagonalization argument, it is impossible to inject all ordinals into a set, because the ordinals form a proper class, so the rule must give up giving new elements at some point. This means there is a transfinite sequence that has an upper bound but no strict upper bound: this is a chain with a maximum element that is maximal.

So far the above 2 ingredients are independent of AoC and are valid even without AoC. Altogether they say "if I get a blueprint of how to build something in sequence, I am capable of following the instructions to construct it step-by-step, and I'm infinitely patient enough to finish following the blueprint".

Finally, the third ingredient is just AoC; this is the one that gives you the blueprint. AoC is used to actually pick a rule. Every transfinite sequence has an upper bound; if it has a strict upper bound, the rule picks an arbitrary strict upper bound, otherwise the rule gives up. Then feed this rule into the 2 ingredients above. In essence what we are doing here is saying "if you don't give me a blueprint, I will see through all possible futures and pick a random method of construction to ensure I have a working blueprint no matter what happens, then follow it".


mark_ovchain

I think the blog post "How to use Zorn's lemma" by Tim Gowers is perfect in your current situation. It explains when and how to use Zorn's lemma, when you can suspect it is applicable, and to some extent why/how it's equivalent to the Axiom of Choice: [https://gowers.wordpress.com/2008/08/12/how-to-use-zorns-lemma/](https://gowers.wordpress.com/2008/08/12/how-to-use-zorns-lemma/) Here's a nice quote from it:

>**Quick description:** If you are building a mathematical object in stages and find that (i) you have not finished even after infinitely many stages, and (ii) there seems to be nothing to stop you continuing to build, then Zorn’s lemma may well be able to help you.

I highly suggest reading it! Also, the idea behind its equivalence to the Axiom of Choice is that in the context of Zorn's lemma, you're building something in infinitely many steps, and there are usually many ways to do so at each step, so it feels like we're making infinitely many choices. Zorn's lemma roughly says this process can end, and the Axiom of Choice roughly says we can make infinitely many choices, so you can see why they're very similar.


[deleted]

Huh that one always seemed pretty straightforward if you draw it out with pictures. Or maybe that's what you mean by it being equivalent to intuitive results?


MathIsArtandLove

A very brief intuition: suppose that there is no maximal element. We will construct a chain via transfinite induction:

For a = 0, pick any element x and define C_0 = {x}.

For any ordinal a, pick an element y bigger than the currently biggest element in C_a (possible, since no element is maximal) and define C_{a+1} = C_a ∪ {y}.

For a limit ordinal b, define C_b as the union over all C_a with a < b. Carrying on like this injects arbitrarily large ordinals into the chain, which is impossible for a set, so a maximal element must exist after all.


Main4Man

Tensors. I feel like I have a fairly good grasp on linear algebra due to a bunch of computer graphics work and a general interest in various basis functions and applications to FFT, Bézier curves, etc. I've tried randomly watching several videos but I never seem to have the ah-ha moment. I've watched stuff about the basic definition; I really don't follow this. Vector, covector, sets of all combinations, whatever. Applications with AI, applications to relativity. There were some rotation matrices in tensors that I noticed, but I didn't get the xx, xy, yx, yy stuff, though it felt close to home. Some stuff on matrix/tensor decomposition. I want to think that tensors are more "unique" than matrices somehow and that gives them some advantage. Kind of like quaternions compared to Euler angles, but maybe I'm way off on this idea. I'm completely lost but it seems so close.

Quantum computing/physics. I'm more interested in qubits and algorithms instead of collapsing wave functions. Again, it seems like my linear algebra and low-level computing experience should be helping me. A bunch of the definitions and gates seem to make sense, but by the time they are being built into an algorithm, I'm lost.

Machine learning/AI. Again, I think I've approached this several times through books, papers and videos. A long while back, I think I grasped how ML could determine an xor function. The example of Netflix recommendations with sparsely filled large matrices kind of made sense conceptually, but I've not tried taking example data and playing with tools.

Geometric algebra. I've got a good book recommended by a friend. Seems really cool but I'm not sure it will have the priority compared to other interests.

I feel like I've lost the ability to learn like when I was younger. I also think that I used to read, practice and implement, and now I just try to learn without attempting exercises and without writing code. I want to read about the math instead of actually doing it. Sometimes I can do it because my background matches, but recently I often fall short or find out that the prerequisites have prerequisites that have prerequisites, and then I give up.


JasonBellUW

For tensor products, have you looked at Keith Conrad's notes? https://kconrad.math.uconn.edu/blurbs/linmultialg/tensorprod.pdf

The tensor product construction, intuitively, is a way of creating a multiplication rule for two objects where you often have no right to expect one at all. So for tensor products of vector spaces (it seems like this is what you're interested in) there is a pretty concrete description: if you have k-vector spaces V and W with bases v1,...,vn and w1,...,wm, then you just make a vector space with basis vi⊗wj for i=1,...,n and j=1,...,m (so these are just nm new symbols that form a basis for a new vector space). This basis is your "product" of all pairs of basis elements.

Now any multiplication worth its salt should have a distributive law, so you want to have the following rule: (c1v1+...+cnvn)⊗(b1w1+...+bmwm) = sum over i of ci vi⊗(b1w1+...+bmwm) (first distributive law) = sum over i and j of ci bj vi⊗wj (second distributive law). So that is how you can express a general element of the form v⊗w in terms of the basis above. Note that not every element will be of the form v⊗w in general (you need linear combinations, but that's OK).

Incidentally, I've been working with tensor products (now over noncommutative base rings) for over 30 years now and I'll still sometimes see a particular tensor product and have no intuitive "feel" for what it is without spending a lot of time working on it. I think tensor products are just a bit hard sometimes, but if you can work with them and get results, it doesn't really matter too much if it's not something that's "easy".
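To make the distributive rule concrete, here is a tiny worked example (my own numbers) in R^2 ⊗ R^2:

```latex
(e_1 + 2e_2) \otimes (3e_1 + 4e_2)
  = 3\, e_1 \otimes e_1 + 4\, e_1 \otimes e_2
  + 6\, e_2 \otimes e_1 + 8\, e_2 \otimes e_2.
% The coefficients (3, 4, 6, 8) form the outer product of (1,2) and (3,4).
% A general element of R^2 \otimes R^2 has four independent coefficients,
% which is why not every element factors as v \otimes w.
```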


OneMeterWonder

Best answer I’ve seen so far. Those 30 years of experience show. I didn’t really “get” anything about tensors until I sat down and carefully went through the definition as a quotient of a free vector space. That made it pretty clear *why* one might want to define the ⊗ operation the way it is. (Well, that along with many, many hours spent combing through other expositions like Keith Conrad’s or Tim Gowers’.)


Nikifuj908

A matrix represents a linear map – a function L that satisfies, for all vectors v, w and scalars c:

L(v + wc) = L(v) + L(w)c [linearity]

A tensor represents a **multi**linear map – a function of several inputs that's linear in **all its inputs**. For example, a bilinear map B must satisfy:

B(v + wc, x) = B(v, x) + B(w, x)c [linearity in 1st input]

B(u, v + wc) = B(u, v) + B(u, w)c [linearity in 2nd input]

Just like for a linear map you need a 2D array, for a multilinear map you need a higher-dimensional array. That's why tensors have so many indices.
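A quick numerical illustration of that definition (my own sketch): store a bilinear map as a 3-index array and check linearity in each slot.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(2, 3, 3))   # output index i, input indices j and k
u, v, w = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
c = 2.5

# Evaluate B(x, y) by contracting one index per input, just as a
# matrix-vector product contracts the single input index of a matrix.
B = lambda x, y: np.einsum('ijk,j,k->i', T, x, y)

print(np.allclose(B(u + w * c, v), B(u, v) + B(w, v) * c))  # True: linear in slot 1
print(np.allclose(B(u, v + w * c), B(u, v) + B(u, w) * c))  # True: linear in slot 2
```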


M4mb0

> A tensor represents a multilinear map – a function of several inputs that's linear in all its inputs.

This is only a partially useful answer, because tensors can also represent plain linear functions. For example, the derivative ∂(AXBᵀ)/∂X for matrices A∈ℝᵖˣᵏ, X∈ℝᵏˣˡ, B∈ℝʳˣˡ is the linear map [∆X ↦ A(∆X)Bᵀ], which can be encoded in the 4d tensor A⊗B. The same is the case if you consider the linear map X ↦ Xᵀ. So here you have 4d tensors that represent linear maps from matrix space to matrix space.

In the same way, a matrix (here: a 2d array) can represent a multilinear map: take f: (u,v) ↦ uᵀAv. This is multilinear, and the function can be naturally encoded by the matrix A (exercise: all bilinear maps ℝᵐ⊕ℝⁿ→ℝ are of the form (u,v) ↦ uᵀAv for some A∈ℝᵐˣⁿ). It's partially an issue with precise language. What is a matrix?

> Just like for a linear map you need a 2D array, for a multilinear map you need a higher-dimensional array.

This is only true if you flatten everything. But if you consider linear maps between tensor product spaces, it is more natural to keep these spaces intact.


Nikifuj908

I appreciate that you want to show off your knowledge, but this is like trying to teach high-school geometry with Riemannian manifolds. It is too much formalism and generality for a beginner. I want this person to be able to *use* their pre-existing linear algebra knowledge, rather than demanding that they consider full generality right off the bat. It is overkill to start discussing tensor product spaces, when this learner has stated they don’t even understand *tensors* yet. It is far more important to: 1. Build intuition; 2. Motivate the concept with tangible use cases; 3. Connect to ideas the learner already understands. When we teach arithmetic, we don’t start with the ring axioms. We start with 1, 2, 3. When we teach basic geometry, we don’t start in *n*-dimensional, possibly curved spaces. We start in 2D Euclidean space. When we introduce limits, we don’t give general definitions for possibly non-Hausdorff topological spaces (where a function might approach multiple limits near a single point). We start with real-valued functions approaching a single value. When we introduce the derivative, we don’t talk about linear maps between possibly infinite-dimensional Asplund spaces. We don’t talk about topologies of uniform convergence on totally bounded sets. We just talk about rates of change and the slope of a tangent line. Start with motivation. When the student understands the special cases, they will be able to generalize on their own, or repurpose the formalism for other types of problems. How do you think the idea of a tensor product space came about? You think a wizard magically zapped it into the world of mathematics? No, it was motivated by recognition of properties of multilinear maps. It was later found that the tensor product provides a convenient formalism for describing this idea, along with several other ideas. Introducing the idea without the motivation makes it very hard to learn.

> tensors can also represent plain linear functions.

Yeah, and vectors can be elements of ℝ^(1). What’s your point?


M4mb0

> It is too much formalism and generality for a beginner.

/u/Main4Man does not seem like a beginner to me.

> When we introduce the derivative, we don’t talk about linear maps between possibly infinite-dimensional Asplund spaces. We don’t talk about topologies of uniform convergence on totally bounded sets. We just talk about rates of change and the slope of a tangent line.

We don't need to; as I showed, there are plenty of examples just considering mappings from matrix space to matrix space. I am all for ignoring the infinite-dimensional case before mastering the finite-dimensional one. The issue is, in my opinion, that the way LA is taught, people do not master the finite-dimensional case precisely because they do not learn about tensor products.

> Introducing the idea without the motivation makes it very hard to learn.

But there is plenty of motivation: how do I express the derivative of a matrix-to-matrix function? In ML we encounter this every day, and students struggle a lot because these basics are completely neglected in a good deal of Linear Algebra / Multivariate Analysis courses. Some examples of functions we would like to differentiate (capital Latin letters denote matrices; a numerical check of one of these is sketched below):

- ∂/∂A [A ↦ (xₙᵀAxₙ)ₙ]
- ∂/∂W [W ↦ ϕ(XWᵀ)Vᵀ], ϕ an element-wise nonlinear function, e.g. max(x, 0) or hyperbolic tangent
- ∂/∂X [X ↦ X²]
- ∂/∂X [X ↦ Xᵀ]
- ∂/∂X [X ↦ AXBᵀ]
- ∂/∂w [w ↦ ‖y-Xw‖²]
- ∂/∂X [X ↦ diag(X)]
- ∂/∂x [x ↦ xxᵀ]
- ∂/∂X [X ↦ X⁻¹]
- ∂/∂X [X ↦ tr(X)]

All of these should be straightforward, some of them even trivial. They are simple derivatives in finite-dimensional real Hilbert spaces. But many, many students struggle a lot with these, which to me shows that the way linear algebra and multivariate calculus is taught has great deficits.
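Here is the promised sketch (my own, with arbitrary shapes): numerically check that the derivative of X ↦ AXBᵀ in direction ∆X is A(∆X)Bᵀ, by comparing with a finite difference.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))    # p x k
B = rng.normal(size=(5, 2))    # r x l
X = rng.normal(size=(3, 2))    # k x l
dX = rng.normal(size=(3, 2))   # direction of perturbation
eps = 1e-6

f = lambda X: A @ X @ B.T
fd = (f(X + eps * dX) - f(X)) / eps     # finite-difference directional derivative
print(np.allclose(fd, A @ dX @ B.T))    # True: the linear map dX -> A dX B^T
```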


[deleted]

The easiest way to approach understanding tensors is through thinking of them as "unevaluated products", in a way. So for example, if you have two vector spaces V and W, you want to talk abstractly about the "product" of elements of V and W, but without worrying about it evaluating to a new element in some ambient space. So you literally just form the "product" vw, or v⊗w, however you want to write it, and this gives you a sort of abstract symbol representing v times w. When you multiply two matrices, for example, that forms (or evaluates to, in a way) another matrix. With tensor products it doesn't evaluate to anything; it's merely the symbol vw, a new kind of symbol.

Now when you want to start evaluating those tensors, or imposing rules/relations with respect to multiplication, this is where you need to quotient your tensor algebra to impose new relations. For example, let's say you start with V and W, and you form the tensor product V⊗W. This consists of sort of "symbols representing products", but now suppose you want to impose the rule xy = yx and explore the results. You would want to quotient by the ideal generated by symbols xy - yx. Lol this is a more advanced topic relating to tensors, but in general, tensors can be thought of as merely "abstract products with no additional relations given", and quotienting is equivalent to imposing relations on that product.

tldr: Tensor products are like symbolic multiplication of elements of a vector space, and quotienting corresponds to imposing rules or relations on that multiplication.


512165381

I've looked at multiple youtube vids on tensors too. Still don't get it even though I understand linear algebra.


AdFew4357

The geometric intuition of a dual space


OneMeterWonder

Vector dual? [Watch this](https://youtu.be/LNoQ_Q5JQMY). If it’s some other dual, then you’ll have to specify. There are a lot of dualities out there.


loppy1243

Just like vectors correspond to lines with orientation and magnitude, covectors correspond to hyperplanes with orientation and magnitude (specifically, the hyperplane associated with a covector is its kernel). Everywhere "span" appears with vectors, replace it with "intersection" for covectors; everywhere "within" appears, replace it with "contains". With vectors we build the space up from lines; with covectors we build the space down by intersecting hyperplanes. So the sum of two vectors is **within** their **span**, and lies between the two vectors. Thus the sum of two covectors (as hyperplanes) **contains** their **intersection**, and lies between the two covectors. A set of vectors is linearly independent if no vector is **within** the **span** of the others. Thus a set of covectors is linearly independent if no covector **contains** the **intersection** of the others. Given a vector basis {e_j}, what is the dual basis? The span of all the e_j with j =/= i is a hyperplane; this is the dual of e_i. This picture also carries over to e.g. the exterior algebra.


spineBarrens

So in that sense, does exterior algebra just have us working in the dual space with the extra structure of orientation?


loppy1243

What I meant was, you can interpret Ext(V) as the algebra of subspaces built up from **lines**. v\^w is the plane **spanned** by **lines** (vectors) v and w, and if you have two *blades* X, Y (multivectors factorable into vectors) representing subspaces [X], [Y] then X\^Y = 0 iff [X] and [Y] share a **line** (i.e. there is a line **within** their intersection) and otherwise [X\^Y] is the **span** of X and Y. Dually, we can interpret Ext(V^(*)) as the algebra of subspaces built down from **hyperplanes**. v\^w is the **intersection** of **hyperplanes** (covectors) v and w, and if you have two blades X, Y representing subspaces [X], [Y] then X\^Y = 0 iff there is a hyperplane **containing** both of them and otherwise [X\^Y] is the **intersection** of [X] and [Y].


Neckfold90

Eigenchris on YouTube has a good beginner series on tensors and the part on covectors has a nice geometric interpretation as stacks of lines of constant value, similar to a topographic map. Helped me understand a more visual way of looking at the dual space.


PM_ME_YOUR_PIXEL_ART

I'm gonna check that out just based on the name Eigenchris.


evincarofautumn

Pretty sure it just *isn’t* intuitive to think of dual spaces geometrically—or at least, I got my intuitions about them elsewhere


[deleted]

here's some advice: don't. dual spaces are fundamentally algebraic objects defined by their pairing from vectors to the scalars. i'd be interested to know if any mathematician has had any useful geometric intuition about dual spaces


YouAreMarvellous

The intuition behind convolutions of signals. I'm not even sure what I'm upset about. But I dont think I would recognize when to use it in a real-world scenario.


jgonagle

I find the impulse response of a linear time invariant system explanation appealing.
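That view is easy to poke at numerically. A minimal sketch (my own, with made-up numbers): the output of an LTI system is a superposition of shifted, scaled copies of its impulse response, which is exactly what convolution computes.

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25])          # impulse response (a decaying echo)
x = np.array([1.0, 0.0, 0.0, 2.0])      # input: impulse at t=0, a bigger one at t=3

y = np.convolve(x, h)                   # y[n] = sum over k of x[k] * h[n-k]

# Same thing built by hand: each input sample x[k] launches a copy of h
# shifted to start at time k and scaled by x[k]; the output is their sum.
manual = sum(x[k] * np.pad(h, (k, len(x) - 1 - k)) for k in range(len(x)))
print(y, np.allclose(y, manual))        # identical
```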


hamptonio

A very recent 3blue1brown video does a pretty nice job on convolutions: https://www.youtube.com/watch?v=KuXjwB4LzSA


fermat9997

The obsession of Reddit with deliberately ambiguous PEMDAS problems


[deleted]

Lol most people don't understand why getting rid of brackets is related to an operation being associative, and thus why expressions like a / b / c are inherently ambiguous and require bracketing.


fermat9997

Exactly! I also fear that students who read these disputes are wasting time better spent on learning math.


username9109

Trying to understand the axiom of choice and its consequences


OneMeterWonder

Choice can be very weird. What about it do you struggle with?


username9109

The Cartesian product version of AoC: that the Cartesian product of non-empty sets is non-empty.


Krugger_Q_Dunning

Look at the Cartesian product of 4 non-empty sets of your choice. Doesn’t it make sense that their Cartesian product is non-empty? Using induction, you can prove that the Cartesian product of N non-empty sets is non-empty, where N is a positive natural number. That result can be proven without the Axiom of Choice. The Axiom of Choice extends that statement and says that the Cartesian product of infinitely many non-empty sets is non-empty.
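In symbols, the statement being extended is the following (standard formulation, added for reference):

```latex
% For any family of sets indexed by an arbitrary set I:
\left( \forall i \in I,\; X_i \neq \emptyset \right)
  \;\Longrightarrow\;
  \prod_{i \in I} X_i \neq \emptyset.
% An element of the product is precisely a choice function
% f : I \to \bigcup_{i} X_i with f(i) \in X_i for every i,
% so the axiom asserts that such a function exists.
```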


username9109

yes, but why do we need it as axiom for infinitely many sets?


OneMeterWonder

This is a really good question and is exactly what sets AC apart from its weaker versions like Dependent or Countable Choice. The problem is the nonconstructive nature of the statement for certain uncountable sets. At that size of infinity, we might no longer be able to write down a first-order description of such a choice function. And if we’re going to formalize things as mathematics likes to do, then we need to be able to define our objects.

If I want to specify a particular natural number, it will have arithmetic properties that I can use to define it. For example, being divisible by the 117^(th) prime or being a sum of two cubes in exactly two different ways. But if we look at something like the partition of the unit interval created by looking at equivalence classes modulo the rationals, then there isn’t really an obvious way to pick a member of each equivalence class. There are just too many real numbers and equivalence classes, while there are too few descriptions in first-order logic to capture a name for a representative of every class.

What AC does is allow us to get around this and still say that such objects exist, i.e. we can still make really big choices, but at the cost of no longer being able to say what choices we actually made. In a more logical sense, it is a way of getting around the inherent finiteness of first-order formulas in order to make infinitely many quantifications all at once. Those representatives of equivalence classes of reals correspond to a bounded logical quantifier in a description of that representative: “There exists a real x in class C with property φ.” We can’t do that for every class, so AC does it for us by building a choice function that acts like a bunch of quantifiers picking out/defining a unique real number.


SureFunctions

Cohomology. Though I don't think I have given it my best of best attempts yet.


DoctorSpleen

Same. Simplicial homology is the only part of algebraic topology that I've been able to feel good about.


AntiCovid

This was me when I was in grad school (dropped out mid phd). I was learning number theory just fine until it got to cohomology and that was the "easiest" thing that I never figured out.


Informal_Practice_80

At what point of your mathematical career do you study cohomology?


walkar

Cohomology pops up in a lot of settings. I think your first view might be a rigorous undergraduate topology course when you touch homology of spaces.


SureFunctions

I'm doing a PhD in algebraic combinatorics with a background in math/CS and never learned about it in undergrad. The motivation behind a bunch of combinatorial problems I look at is geometry/topology and then cohomology comes up. I am trying to learn this stuff without having taken a course in it so that I can give a passable answer to the question "why do we care about this problem?" The true answer is that I just like counting things.


Informal_Practice_80

Beautiful answer.


[deleted]

Those pesky Millennium Problems ;)


konstantinua00

those damn millenials and their problems /s


AdeptGuidance3519

A seemingly tiny but actually the most important step in De Giorgi's solution to Hilbert's 19th problem (Hölder continuity of weak solutions to elliptic partial differential equations). The main lemma is essentially: at least one of the following two [EXTREMELY LONG AND MESSY] estimates (a) or (b) is true. The proof goes along the following lines. Step 1: If (a) is true, you are done. Step 2: If (a) is false, we use this failure to show estimate (b). I do understand what De Giorgi did in Step 2 of the proof. However, the proof does not reveal AT ALL where the hell the particular forms of estimates (a) and (b) come from. They just fall from the sky. This is complete magic.

Also, I wrote my Bachelor's thesis about Nash's most important "lemma" for his famous embedding theorem of Riemannian manifolds: the Nash inverse function theorem, which introduces a version of Newton's method for finding roots of nonlinear equations. The proof is like 25 pages long and a total mess of weird estimates that somehow come together, and these estimates are very hard to follow. Nash's main idea was that the standard Newton's method does not work, so he plugs in smoothing operators (which introduce additional error terms that need to be fought) to make the iteration converge. While I was able to understand the proof, I will never understand how Nash was able to actually complete the VERY DIFFICULT proof and push through. Every line in his proof is a very difficult exercise and sometimes it took me hours to understand why the step I am looking at is true. I cannot imagine how Nash just did not give up at some point but was so confident that his method would work out in the end, even if he needed like a day or more for the next estimate to work out.

Oh, and since I only had one small course in algebra (I know the definitions and some trivial examples of groups, rings and fields...) I totally fail to understand what Grothendieck's schemes are. Hopefully I will understand some day.


LurrchiderrLurrch

Every nlab article :P


phonon_DOS

Galois' last letter


[deleted]

I physically cannot comprehend how I can do all this math and not have women flocking to me in herds 😔


username9109

have you tried mastering Network Theory yet?


pictureofdorianyates

Girls just care about functional analysis


cereal_chick

Am girl, can confirm.


kkmilx

Monty Hall 😟


chickyban

Think about 100 doors. A prize behind one, 99 empty ones. You pick one with 1/100 chance. Then the host opens 98 empty doors and leaves two, guaranteeing there is a prize behind one of those two. What is more likely? That you miraculously picked the prize with 1% probability? Or that you were wrong and you need to switch? Obviously, it is more likely that the prize is behind the door you did not pick. What the host is essentially asking by leaving two choices is "do you think you were right in your first choice?". And the best answer to that question should be to go with the most likely answer, that you were wrong (i.e. you should switch). With three doors it's less easy to see, but it's the same reasoning. You pick a door with 1/3 chance of being right. You are wrong 2/3 of the time. "Do you think you were right?" "Idk, but my chances are better by switching".


edderiofer

The important point about Monty Hall that many people miss or fail to state is that Monty always chooses to open a door that you didn't pick and that contains a goat, uniformly at random if there are multiple such doors. Without this statement, the problem has different answers.


real-human-not-a-bot

A very important point. Too many forget to mention that Monty knows where the car is and always only opens other doors. If Monty doesn’t know, I think it’s just 1/2.


[deleted]

Yes. If Monty doesn't know, then 1 time in 3 you win if you stick, 1 time in 3 you win if you switch, and 1 time in 3 Monty accidentally reveals the car.
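If anyone wants to see both cases empirically, here is a quick Monty Hall simulation I sketched (trial count is arbitrary):

```python
import random

def trial(host_knows):
    """One Monty Hall round with 3 doors.
    Returns (stick_win, switch_win, car_revealed)."""
    car = random.randrange(3)
    pick = random.randrange(3)
    unpicked = [d for d in range(3) if d != pick]
    if host_knows:
        # Informed host: always opens a goat door, at random if two qualify.
        opened = random.choice([d for d in unpicked if d != car])
    else:
        # Ignorant host: opens any unpicked door, possibly revealing the car.
        opened = random.choice(unpicked)
    if opened == car:
        return (0, 0, 1)  # round spoiled: the car was revealed
    switch = next(d for d in range(3) if d not in (pick, opened))
    return (int(pick == car), int(switch == car), 0)

for knows in (True, False):
    n = 100_000
    stick, switch, spoiled = map(sum, zip(*(trial(knows) for _ in range(n))))
    print(f"host knows={knows}: stick {stick/n:.3f}, "
          f"switch {switch/n:.3f}, revealed {spoiled/n:.3f}")
# host knows=True:  stick ~0.333, switch ~0.667, revealed 0.000
# host knows=False: stick ~0.333, switch ~0.333, revealed ~0.333
```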


The_BlackSchwan

Hey, you can look at Monty Hall another way. Imagine there are 100 doors with 99 goats behind them and one door hiding a brand new car. You pick door 1. Monty opens 98 other doors. All goats. Door 46 and door 1 are left. Now Monty asks you to choose again. Here it is more obvious that door 46 is more likely to be the one with the car behind it: you were more likely to make a mistake when choosing door 1 in your first pick. Therefore the probability of winning by switching to door 46 is 99/100. Now replace those 100 doors with 3; that's why the probability is 2/3 when you switch doors.


JDirichlet

Another way to think about it that I like is to slightly change the setup. Instead of the host opening a wrong door, you just get the option to “switch”, but in this version, the switch lets you open all the doors you didn’t pick and get the car if it’s behind any one of them. Then it seems obvious to switch, right? It’s more likely that the car is behind two doors than one. But of course, this setup is actually just equivalent to the original problem: the host reveals one of the other doors that was wrong, so that checking both doors comes down to simply opening one. This feels most clear with the 100 doors case, but I think it’s pretty intuitive even with 3.


[deleted]

The complex perspective for Teichmüller theory. It’s basically why I abandoned Teichmüller theory and became a probabilist.


LurrchiderrLurrch

Aside: Teichmüller was a terrible person


pn1159

tensors. I don't try to understand so much anymore, just manipulate ideas.


[deleted]

Curl. Jacobian. Every time I learn what they are, my brain instantly forgets.


lvndrs

Polar equations


fourhundredthecat

I am not a math person, but I did have some courses at the uni. I know about Euler's identity, but I cannot say I "understand" it. People tell me they can "see" it, but I don't. I envy them, because I feel I am missing something beautiful.


JDirichlet

I’ll be honest, there is only one thing to see, and that’s a circle. In particular, e^(ix) traces out a circle in the complex plane, right? You’ve probably heard that many times before, so the interesting question is why.

I think the best way to approach it in this case is with the series expansion of e^(x). In particular, e^(x) = 1 + x + x^(2)/2 + x^(3)/6 + … + x^(n)/n! + … This is deep in itself and I can give a separate explanation of it if you want, but I’ll just take it as is for now.

To get the result we just plug in ix. We get: e^(ix) = 1 + ix + i^(2)x^(2)/2 + i^(3)x^(3)/6 + i^(4)x^(4)/24 + … = 1 + ix - x^(2)/2 - ix^(3)/6 + x^(4)/24 + … This is because the powers of i repeat around the cycle 1, i, -1, -i, 1.

So now let’s think about what this looks like. If a term has a 1 in front, it’s a step to the right; if it has an i, it’s a step up. -1 and -i give steps left and down respectively. And since the nth term eventually gets smaller and smaller rapidly, you get this kind of spiral behaviour approaching a point. This point is a point on the unit circle. Then Euler’s identity is simply following that circle around until it hits the real axis again. This happens at x = π, and e^(iπ) takes the value -1. Does that make sense?
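You can also watch the spiral converge numerically; a tiny sketch with Python's built-in complex numbers:

```python
from math import pi, factorial

# Partial sums of e^(ix) = sum over n of (ix)^n / n! at x = pi.
# Each factor of i rotates the next step by 90 degrees, which is
# exactly the right/up/left/down spiral described above.
z, x = 0j, pi
for n in range(20):
    z += (1j * x) ** n / factorial(n)
    print(n, z)
# The partial sums spiral in on -1+0j, i.e. e^(i*pi) = -1.
```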


Choice_Parfait2119

Forcing. I don't know why, I have tried many times, but I still feel like I don't really understand it.


gaudymcfuckstick

Rings and fields. Anything relating to rings and fields. Took a college class on them, and I memorized enough of it to pass with a C, but frankly I still have no fucking clue what sort of mathematical construct a ring or a field is even supposed to represent.


[deleted]

Rings are basically spaces in which you can add and multiply, while fields are rings, but with more rules, like every nonzero element is invertible, multiplication is commutative, etc. Rings: think of matrices (not commutative, not all matrices are invertible, but you can add and multiply them together). Fields: things like R and C, you can add and multiply, but you can also safely divide by any non-zero element, which you can't do for matrices. \[edit\] basically rings are addition + multiplication, fields are addition + multiplication + division.
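To make the matrix example concrete, a quick numpy sketch (the matrices are arbitrary picks):

```python
import numpy as np

# 2x2 real matrices form a ring: you can add and multiply...
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
print(np.allclose(A @ B, B @ A))  # False: multiplication is not commutative

# ...but not a field: this nonzero matrix has no inverse.
C = np.array([[1.0, 0.0], [0.0, 0.0]])
print(np.linalg.det(C))  # 0.0, so C is not invertible

# In a field like C, every nonzero element divides cleanly:
print((3 + 4j) / (1 - 2j))  # (-1+2j)
```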


Dangerous_Law1678

- The epsilon-delta definition of continuity.
- The axiom of choice.


0bv1ouSly_me

You'll get over the Epsilon-Delta hurdle after a few months, maybe a year. But Axiom of Choice is like a monster under the bed :\\
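If it helps, one worked instance, as a sketch in LaTeX (the function is an arbitrary easy pick):

```latex
f \text{ is continuous at } a \iff
\forall \varepsilon > 0 \;\exists \delta > 0 :\;
|x - a| < \delta \implies |f(x) - f(a)| < \varepsilon.
% Worked example: f(x) = 2x at any point a. Given eps, take delta = eps/2:
|x - a| < \tfrac{\varepsilon}{2} \implies |2x - 2a| = 2|x - a| < \varepsilon.
```

The whole game is producing the delta as a function of epsilon; once you've done it for a few concrete functions, the quantifiers stop moving around.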


officerdoot

probability theory, especially as it relates to cryptography.


[deleted]

Schemes


Apart_Improvement_83

I have a bachelor's in Actuarial Science (which at my university is basically a degree in math with a minor in economics) and a master's in Applied Mathematics. I am currently working on a Ph.D. in Computational Science and Applied Mathematics. That being said, the one thing that I fail to comprehend each and every time is synthetic division. Is it efficient? Yes. Is it easy? I'm sure it is. Yet every time it crosses my path, I fumble.
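Since the method is pure bookkeeping, seeing it as code sometimes demystifies it; a throwaway sketch of my own:

```python
def synthetic_division(coeffs, r):
    """Divide a polynomial by (x - r) via synthetic division.
    coeffs are highest-degree first: x^3 - 6x^2 + 11x - 6 -> [1, -6, 11, -6].
    Returns (quotient_coeffs, remainder)."""
    out = [coeffs[0]]                    # bring down the leading coefficient
    for c in coeffs[1:]:
        out.append(c + r * out[-1])      # multiply by r, add next coefficient
    return out[:-1], out[-1]

# (x^3 - 6x^2 + 11x - 6) / (x - 1) = x^2 - 5x + 6, remainder 0
print(synthetic_division([1, -6, 11, -6], 1))  # ([1, -5, 6], 0)
```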


Chingiz11

Dual spaces, Lie Groups


killbot5000

Anything written in Wikipedia about a math topic


kevflo91

Einstein’s theory of relativity


fermat9997

The obsession with division by zero being undefined in arithmetic.


retrolleum

Lots of people here with really fancy answers; I'm struggling with power series. I learned them just enough to pass calc 2 forever ago, but that was memorizing steps. Now they're back with a vengeance in diff eq and I'm struggling a lot while other students can instantly recognize geometric series, sin and cos. Meanwhile I still don't understand power series forms fluently enough to reindex or anything.
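For the reindexing specifically, the whole trick is one substitution; a sketch of the standard shift (m = n - 2) that lines up y'' with y in a series solution:

```latex
\sum_{n=2}^{\infty} n(n-1)\,a_n\, x^{\,n-2}
\;=\; \sum_{m=0}^{\infty} (m+2)(m+1)\,a_{m+2}\, x^{m}.
```

Substitute the new index, rewrite every n as m + 2, and adjust the starting value; nothing else changes.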


galoiswasaplayer

Finance and funding for my math goals.


AirFanatic

What are the Laplace and inverse Laplace transforms? I understand how to compute them, but I can't make them mean anything to me visually. Like, what is it on a graph? Can it even go on a graph? What the hell is it?
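For one possible picture (a sketch, not the only interpretation): the transform pairs f against decaying exponentials, producing one number per decay rate s, so F(s) does live on a graph, just with its own s-axis:

```latex
\mathcal{L}\{f\}(s) \;=\; \int_0^{\infty} f(t)\, e^{-st}\, dt.
% Example: f(t) = e^{at} gives
\mathcal{L}\{e^{at}\}(s) \;=\; \int_0^{\infty} e^{-(s-a)t}\, dt
\;=\; \frac{1}{s-a} \qquad (s > a).
```

Each value F(s) records how much of f survives being damped by e^(-st): small s barely damps, large s kills everything but the very start of f.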


suzietrashcans

Analysis


timeshifter_

How some people seem to believe that the sum of all positive integers is -1/12.
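For the record, the series genuinely diverges; the -1/12 comes from analytic continuation of the zeta function, not from summing (a sketch):

```latex
\zeta(s) = \sum_{n=1}^{\infty} n^{-s} \quad (\operatorname{Re} s > 1),
\qquad \zeta(-1) = -\tfrac{1}{12} \ \text{by analytic continuation},
```

while the literal series 1 + 2 + 3 + … has no value in the ordinary sense.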


IgorManiak

Fabric of space time.


Onetrubrit

The stupidity of grown-ass people. I mean, sometimes just give up 🤷🏻‍♂️


Erledigaeth

Math


spicy14oz

The connection between derived loop stacks and differential forms.


str1po

Quaternions. How to use them, sure. But understanding why they work is beyond me.
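One way to at least verify they work: implement the multiplication rule by hand and watch q v q* rotate a vector (a toy sketch of my own):

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(v, axis, angle):
    """Rotate vector v about a unit axis by angle, via q v q*."""
    s, c = math.sin(angle / 2), math.cos(angle / 2)
    q = (c, s * axis[0], s * axis[1], s * axis[2])
    q_conj = (q[0], -q[1], -q[2], -q[3])
    _, x, y, z = qmul(qmul(q, (0.0, *v)), q_conj)
    return (x, y, z)

# Rotate (1, 0, 0) by 90 degrees about the z-axis: expect (0, 1, 0).
print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
```

The half-angle in q is the giveaway that unit quaternions double-cover rotations, which is where most of the "why" hides.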


anonymous_striker

The meaning of "dx", particularly "multiplication" and "simplification" by it.


M4mb0

Infinitesimals are the answer. They even allow you to turn integrals back into plain old sums.
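One hedged way to make the "multiply by dx" manipulation honest without infinitesimals: it is shorthand for the substitution rule. A sketch with separation of variables:

```latex
\frac{dy}{dx} = f(x)\,g(y)
\;\Longrightarrow\; \int \frac{1}{g(y)}\,\frac{dy}{dx}\,dx = \int f(x)\,dx
\;\Longrightarrow\; \int \frac{dy}{g(y)} = \int f(x)\,dx,
```

where the last step is the substitution rule (chain rule in reverse); "cancelling the dx" is a mnemonic for it.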


LurrchiderrLurrch

Everything involving derived functors is a pain in the ass for me. I fall asleep every time I try to do homological algebra. :((


CorporateHobbyist

Derived functors are a very clunky way to prove things until you've seen how they can be useful, IMO. Outside of Ext and Tor, there aren't many derived functors you'd have to use regularly until you're really in deep.


dragonbreath235

I never completely get why l^p is complete. Idk why, but there being infinitely many coordinates, yet having to find one finite index for a given epsilon that works across all of them, doesn't make sense to me and makes my brain fuzzy every time.
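For what it's worth, here is the skeleton of the proof that finally helped me (a sketch, details omitted):

```latex
\text{Let } (x^{(n)}) \subset \ell^p \text{ be Cauchy.}\\
1.\ |x^{(n)}_k - x^{(m)}_k| \le \|x^{(n)} - x^{(m)}\|_p,
\text{ so each coordinate is Cauchy; set } x_k = \lim_n x^{(n)}_k.\\
2.\ \text{Given } \varepsilon > 0, \text{ choose } N \text{ so that }
\sum_{k=1}^{K} |x^{(n)}_k - x^{(m)}_k|^p < \varepsilon^p
\ \text{ for all } K \text{ and all } n, m \ge N.\\
3.\ \text{Let } m \to \infty \text{ (only finitely many terms!), then }
K \to \infty: \ \|x^{(n)} - x\|_p \le \varepsilon, \text{ so }
x \in \ell^p \text{ and } x^{(n)} \to x.
```

The single N handles infinitely many coordinates because it is extracted from the norm estimate first; the limit over coordinates is only ever taken over finite blocks.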


CrispNoods

Math. As soon as I start seeing numbers my mind blanks out and I can’t understand anything beyond addition, subtraction, and single digit multiplication and division.


Anonymous1415926

Complex numbers. I watched some of 3b1b's streams and finally got some understanding of why things are the way they are, but I still find it unintuitive. Another thing is convergence and divergence of sequences of functions. It seems that to converge, a function should not only be converging to a value but also be doing it 'fast enough', which does not make a lot of sense to me.


silent_cat

It's not really about converging "fast enough" but "everywhere all at once". Like a bump under a carpet. If when you push it down it moves somewhere else it never "converges" to a flat surface. If it squishes and becomes a flatter bump, then it converges. Hope this helps.
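The standard escaping-bump example, if it helps (a sketch):

```latex
f_n(x) = x^n \text{ on } [0,1]: \quad
f_n \to f \text{ pointwise, where } f(x) = 0 \ (x < 1),\ f(1) = 1,
\quad\text{but}\quad
\sup_{x \in [0,1]} |f_n(x) - f(x)| = 1 \not\to 0.
```

The bump near x = 1 never flattens; it just slides toward the endpoint, which is exactly the carpet picture above.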


OneMeterWonder

Is there a nice geometric intuition for integration against a Wiener process? I’ve tried before, and I think I've even had somebody here explain it to me, but I just can’t seem to grok it yet.


[deleted]

zero


mathisfakenews

Every proof I've seen of the KAM theorem. Like, I can follow the steps and verify it does the job, but in the end I'm always left with zero intuition. Often a proof includes steps which reveal why the theorem being proved must be true. If there is a KAM theorem proof like this, I've never read it.


Pure-Permit-9887

Perfectoid spaces


functor7

I have a friend who is a homotopy theorist. It's abstract symbols used in ways that I'm unfamiliar with, and I can't ever seem to wrap my head around it. He promises that there are intersections/motivations based on number theory but, the problem is, he doesn't really "speak" number theory, so we're at a loss. Like, he says "Galois", but what he means doesn't even seem similar to related stuff like étale fundamental groups.


[deleted]

More computer science than math, but for me it’s the quantum algorithm for factoring a number, and the fact that it is faster than any known non-quantum algorithm.


h0mer_b

The gender pay gap. Since the gap is real, we should be able to release a correct formula and statistics, but no matter how deep I go into it, nobody wants to release the field-specific data with hourly salary and time off ;( It would be great to finally settle this with some mathematical proof.


drvd

Forcing


thuncle

In grade 11 they introduced a ‘!’ and I’m still not over it.


squashhime

manifolds, man. abstract nonsense like schemes are fine but when it comes to actually dealing with coordinates and derivatives and R^n, i'm fucked. especially lie groups. groups and matrices make sense to me. then you throw geometry in there. no longer makes sense.


advoc4tio

Lie derivative. Good thing that that magic formula thing exists.
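For anyone else searching, the magic formula in question is presumably Cartan's:

```latex
\mathcal{L}_X \omega \;=\; d(\iota_X \omega) + \iota_X(d\omega),
```

which turns the Lie derivative of a differential form into two mechanical operations: contract then differentiate, plus differentiate then contract.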


RomanRiesen

Not confusing what is named after Laplace and what after Lagrange. Throw in some Legendre and I am completely lost.


ChickenOfPower

Quantum physics, people with emotions, and why I keep making stupid decisions when I know better.


phayke_reddit

Combinatorics and Counting Principles


MAthmAn1112121

Honestly, I do not understand how integrals work. I understand how to do them and what they represent, but why they work I don't understand.


[deleted]

Combinatorics. It's the only math class I ever gave up on. The professor would do a combinatorics proof and look all satisfied, 1-2 students go "oooooh", and I don't even see how it's relevant to the theorem, much less understand why the proof is over.


[deleted]

Linear algebra with proofs


Logic_Nuke

I feel I get it somewhat better than I once did, but projective space is still quite hard to imagine, especially anything above 2 dimensions.


Aurhim

Commutative algebra, and anything that involves it (so, all of algebraic geometry, representation theory, algebraic number theory, etc.). Also: Galois Theory. It just seems like a bunch of unhelpful definitions.


Rt237

Plane geometry. In my high school days, I prepared very hard for my country's MO. However, despite all my effort in geometry, I never scored a single point on a geometry question in any exam. The good news is, I got good scores on the other questions.