Look it is so simple, it just acts on an uncountably infinite dimensional vector space of differentiable functions.
fun fact: the vector space of differentiable functions (at least on compact domains) is actually of countable dimension.
still infinite though
Doesn’t BCT imply that infinite-dimensional Banach spaces cannot have a countable basis?
Uhm, yeah, but there’s two different definitions of basis iirc. And i’m using the analytical definition here; you’re talking about the linear algebra definition.
So I’d call an infinite-dimensional vector space of countable/uncountable dimension if it has a countable/uncountable basis, respectively. What is the analytical definition? Or do you mean basis in the sense of topology?
Uhm, i remember there’s two definitions for basis.
The basis in linear algebra says that you can compose every vector v as a finite sum v = sum over i from 1 to N of a_i * v_i, where a_i are arbitrary coefficients
The basis in analysis says that you can compose every vector v as an infinite sum v = sum over i from 1 to infinity of a_i * v_i, so that makes a convergent series. It requires that a topology is defined on the vector space first, so convergence becomes well-defined. We call such a vector space countably infinite-dimensional if a basis (v_1, v_2, …) exists such that every vector v can be represented as a convergent series.
i just checked and there’s official names for it:
- the term Hamel basis refers to basis in linear algebra
- the term Schauder basis is used to refer to the basis in analysis sense.
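For reference, a rough side-by-side of the two definitions as I understand them (my own phrasing, not a textbook quote):

```latex
\text{Hamel basis:}    \quad v = \sum_{i=1}^{N} a_i v_i,      \quad \text{a finite sum, with } N \text{ depending on } v \\
\text{Schauder basis:} \quad v = \sum_{i=1}^{\infty} a_i v_i, \quad \text{a series converging in the space's topology}
```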
Ah that makes sense, regular definition of basis is not much of use in infinite dimension anyways as far as I recall. Wonder if differentiability is required for what you said since polynomials on compact domains (probably required for uniform convergence or sth) would also work for cont functions I think.
regular definition of basis is not much of use in infinite dimension anyways as far as I recall.
yeah, that’s exactly why we have an alternative definition for that :D
Wonder if differentiability is required for what you said since polynomials on compact domains (probably required for uniform convergence or sth) would also work for cont functions I think.
Differentiability is not required; what is required is a topology, i.e. a definition of convergence to make sure the infinite series are well-defined.
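On the polynomials-on-compact-domains point above: as far as I recall, compactness is what lets the Weierstrass approximation theorem kick in, so continuous functions can be uniformly approximated by polynomials; density alone is a weaker property than having a Schauder basis, though.

```latex
\forall f \in C[a,b],\ \forall \varepsilon > 0,\ \exists \text{ polynomial } p:
\qquad \sup_{x \in [a,b]} \lvert f(x) - p(x) \rvert < \varepsilon
```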
If not fraction, why fraction shaped?
When a mathematician wants to scare a physicist, he only needs to speak about ∞
When a physicist wants to impress a mathematician, he explains how he tames infinities with renormalization.
Only the sith deal in ∞
…and Buzz Lightyear
The thing is that it’s legit a fraction and d/dx actually explains what’s going on under the hood. People interact with it as an operator because it’s mostly looking up common derivatives and using the properties.
Take for example
∫f(x) dx
to mean "the sum (∫) of supersmall sections of x (dx) multiplied by the value of f at that point (f(x))". This is why there’s dx at the end of all integrals. The same way you can say that the slope at x is a tiny change in f(x) divided by a tiny change in x, or
df(x) / dx
or, more traditionally, (d/dx) * f(x).
The other thing is that it’s legit not a fraction.
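A quick numerical sketch of that "sum of supersmall slices" picture; the integrand and the step size are just made up for illustration:

```python
# Illustrative only: approximate the integral and the slope of f(x) = x**2
# by chopping [0, 1] into small pieces of width dx.

def f(x):
    return x ** 2

N = 10_000
dx = 1.0 / N  # a "supersmall section of x" (finite here, of course)

# Integral over [0, 1]: sum of f(x) * dx over all the little slices.
integral = sum(f(i * dx) * dx for i in range(N))

# Slope at x = 0.5: tiny change in f divided by tiny change in x.
x = 0.5
slope = (f(x + dx) - f(x)) / dx

print(integral)  # ~0.33328 (exact value is 1/3)
print(slope)     # ~1.0001  (exact value is 1.0)
```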
it’s legit a fraction, just the numerator and denominator aren’t numbers.
No 👍
try this on – Yes 👎
It’s a fraction of two infinitesimals. Infinitesimals aren’t numbers, however, they have their own algebra and can be manipulated algebraically. It so happens that a fraction of two infinitesimals behaves as a derivative.
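One concrete way to give infinitesimal-ish objects an algebra is dual numbers, where ε² = 0 (this is the trick forward-mode automatic differentiation uses, not the full non-standard-analysis construction). With that rule, the "ratio" picture really does produce the derivative. A toy sketch, all names mine:

```python
# Toy dual numbers: a + b*eps with the rule eps**2 == 0.
# The eps coefficient tracks the derivative, so plugging x + eps into f
# gives f(x) + f'(x)*eps, i.e. the "infinitesimal ratio" is literally f'(x).

class Dual:
    def __init__(self, real, eps=0.0):
        self.real = real
        self.eps = eps

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._wrap(other)
        return Dual(self.real + other.real, self.eps + other.eps)

    __radd__ = __add__

    def __mul__(self, other):
        other = self._wrap(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 == 0
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

    __rmul__ = __mul__


def f(x):
    return 3 * x * x + 2 * x   # f'(x) = 6x + 2


x = Dual(5.0, eps=1.0)          # 5 plus one infinitesimal step
print(f(x).eps)                 # 32.0, which is f'(5)
```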
Ok, but no. Infinitesimal-based foundations for calculus aren’t standard and if you try to make this work with differential forms you’ll get a convoluted mess that is far less elegant than the actual definitions. It’s just not founded on actual math. It’s hard for me to argue this with you because it comes down to simply not knowing the definition of a basic concept or having the necessary context to understand why that definition is used instead of others…
Why would you assume I don’t have the context? I have a degree in math. I could be wrong about this, I’m open-minded. By all means, please explain how infinitesimals don’t have a consistent algebra.
-
I also have a masters in math and completed all coursework for a PhD. Infinitesimals never came up because they’re not part of standard foundations for analysis. I’d be shocked if they were addressed in any formal capacity in your curriculum, because why would they be? It can be useful to think in terms of infinitesimals for intuition but you should know the difference between intuition and formalism.
-
I didn’t say “infinitesimals don’t have a consistent algebra.” I’m familiar with NSA and other systems admitting infinitesimal-like objects. I said they’re not standard. They aren’t.
-
If you want to use differential forms to define 1D calculus, rather than an NSA/infinitesimal approach, you’ll eventually realize some of your definitions are circular, since differential forms themselves are defined with an implicit understanding of basic calculus. You can get around this circular dependence, but only by introducing new definitions that are ultimately less elegant than the standard limit-based ones.
-
Software engineer: 🫦
I still don’t know how I made it through those math curses at uni.
Calling them ‘curses’ is apt
Division is an operator
But df/dx is a fraction: it is a ratio between the differential of f and the standard differential of x. They both live in the cotangent space T*R, which is isomorphic to R.
What’s not a fraction is ∂f/∂x, but likely you already know that. This is akin to how you cannot divide two vectors.
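Spelling out the 1D picture being described here, as I read it:

```latex
df = f'(x)\,dx \;\Longrightarrow\; \frac{df}{dx} = f'(x),
\qquad \text{whereas } \; df = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy
\; \text{ leaves no single thing to divide by.}
```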
The world has finite precision. dx isn’t a limit towards zero, it is a limit towards the smallest numerical non-zero quantity. For physics, that’s the Planck scale; for engineers, it’s the least significant bit/figure. All of calculus can be generalized to arbitrary precision, and it’s called discrete math. So not even mathematicians agree on this topic.
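For what it’s worth, here is what the fixed-finite-step version of a derivative looks like numerically; purely illustrative, not a claim about physics:

```python
import math

def forward_difference(f, x, h):
    # Finite-step "derivative": no limit taken, just a fixed small step h.
    return (f(x + h) - f(x)) / h

# True value: cos(1.0) ≈ 0.5403023
for h in (1e-1, 1e-4, 1e-8, 1e-12):
    print(h, forward_difference(math.sin, 1.0, h))
# The error first shrinks as h shrinks, then grows again once floating-point
# round-off dominates, which is where the finite-precision point above comes in.
```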
Having studied physics myself I’m sure physicists know what a derivative looks like.
Little dicky? Dick Feynman?
I found math in physics to have this really fun duality of “these are rigorous rules that must be followed” and “if we make a set of edge case assumptions, we can fit the square peg in the round hole”
Also I will always treat the derivative operator as a fraction
2+2 = 5
…for sufficiently large values of 2
Found the engineer
i was in a math class once where a physics major treated a particular variable as one because at cosmic scale the value of the variable basically doesn’t matter. the math professor both was and wasn’t amused
Engineer. 2+2=5+/-1
Computer science: 2+2=4 (for integers at least; try this with floating point numbers at your own peril, you absolute fool)
comparing floats for exact equality should be illegal, IMO
Freshman engineer: wow, floating point numbers are great.
Senior engineer: actually the distribution of floating point errors is a mindfuck.
Professional engineer: the mean error for all pairwise 64-bit floating point operations is smaller than the Planck constant.
0.1 + 0.2 = 0.30000000000000004
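The classic demonstration, plus the usual workaround (math.isclose has been in the standard library since Python 3.5):

```python
import math

print(0.1 + 0.2)                      # 0.30000000000000004
print(0.1 + 0.2 == 0.3)               # False: exact comparison bites
print(math.isclose(0.1 + 0.2, 0.3))   # True: compare within a tolerance instead
```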
pi*pi = g
units don’t match, though
Statistician: 1+1=sqrt(2)
I mean as an engineer, this should actually be 2+2=4 +/-1.
is this how Brian Greene was born?
I always chafed at that.
“Here are these rigid rules you must use and follow.”
“How did we get these rules?”
“By ignoring others.”
This very nice Romanian lady who taught me complex-plane calculus made sure to emphasize that e^(jθ) was just a notation.
Then she proceeded to just use it as if it were actually Euler’s number raised to jθ. And I still don’t understand why, and in what cases I can’t just assume it’s the actual thing.
Let’s face it: Calculus notation is a mess. We have three different ways to notate a derivative, and they all suck.
Calculus was the only class I failed in college. It was one of those massive 200-student classes. The teacher had a thick accent and handwriting that was difficult to read. Also, I remember her using phrases like “iff” that at the time I thought were her misspelling something, only to later realize it was shorthand for “if and only if”, so I can’t imagine how many other things just blew over my head.
I retook it in a much smaller class and had a much better time.
It is just a definition, but it’s the only definition of the complex exponential function which is well-behaved and agrees with the real-variable function on the real line.
Also, every identity about analytic functions on the real line also holds for the respective complex function (excluding things that require ordering). They probably should have explained it.
She did. She spent a whole class on the fundamental theorem of algebra, I believe? I was distracted though.
I’ve seen e^{d/dx}
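For what it’s worth, that one does have a standard reading: expand the exponential of the operator as a power series, and on nice enough (analytic) functions it collapses to Taylor’s theorem, i.e. a shift:

```latex
e^{a \frac{d}{dx}} f(x)
  \;=\; \sum_{n=0}^{\infty} \frac{a^{n}}{n!}\,\frac{d^{n} f}{dx^{n}}(x)
  \;=\; f(x + a)
```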
It legitimately IS exponentiation. Romanian lady was wrong.
e^(𝘪θ) is not just notation. You can graph the entire function e^(x+𝘪θ) across the whole complex domain and find that it matches up smoothly with both the version restricted to the real axis (e^x) and the imaginary axis (e^(𝘪θ)). The complete version is:
e^(x+𝘪θ) := e^x (cos(θ) + 𝘪 sin(θ))
Various proofs of this can be found on Wikipedia. Since these proofs just use basic calculus, this means we didn’t need to invent any new notation along the way.
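A quick spot check of that identity with the standard library (not a proof, just evaluating both sides at an arbitrary point):

```python
import cmath
import math

x, theta = 0.7, 2.3   # arbitrary test point
lhs = cmath.exp(x + 1j * theta)
rhs = math.exp(x) * (math.cos(theta) + 1j * math.sin(theta))

print(lhs, rhs)                  # the same complex number
print(cmath.isclose(lhs, rhs))   # True
```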
I’m aware of that identity. There’s a good chance I misunderstood what she said about it being just a notation.
It’s not simply notation, since you can prove the identity from base principles. An alien species would be able to discover this independently.
What is Phil Swift going to do with that chicken?
They will repair it with flex seal of course
To demonstrate the power of flex seal, I SAWED THIS CHICKEN IN HALF!