I tried to get the Mathematica version to do something useful, but in typical "mathematician minimalist style" it squeezed everything into one enormous, terse, but useless blob. The code on GitHub works as a demo only. Even after trying a few different syntax variations for about twenty minutes, I couldn't figure out how to feed the "ILT" function something that would spit out what I expect.
In case the original authors ever come across this article: There is a standard for writing Mathematica modules! Please take a look at some other modules available online, and see how they split the code into small, reusable functions.
Even the name is too short. In the Mathematica naming convention it should be called "InverseLaplaceTransformCME[...]" or something like that. Ideally, use the same calling convention as the built-in function, documented here: https://reference.wolfram.com/language/ref/InverseLaplaceTra...
This would let your function be a drop-in replacement, allowing users to switch between the symbolic and approximate versions trivially.
The researchers already did the hard work! Programmers should see this as an opportunity to thank scientists and develop their own polished version. We shouldn't expect to receive everything ready to import directly into our projects.
There must be some name for the effect seen here so regularly, best described as the contrast in magnitude between the thing being posted and the comments made about it.
In this case we have a fundamental contribution to mathematics that is succinctly captured in 252 words, producing a 209 word complaint essentially about whitespace.
Would you be able to provide one or two examples of what you consider well written Mathematica modules? Or provide a reference on Mathematica programming that you liked?
A little ELI5 for those who haven't had Laplace transforms at school, from someone who only had a Laplace 101 course, so for what it's worth: Laplace transforms allow you to convert differential equations into easier equations, and back: the differentials and integrals become multiplications and divisions. So you can take a differential equation, transform it into the Laplace domain, manipulate it, and convert it back. And that's cool because differential equations tend to appear everywhere, for instance to model springs, electrical circuits with caps and coils, the surface of a soap bubble, heat in a metal rod, etc. A sibling is the z-transform, which is like the digital version; it is used for instance to design digital audio filters. I'm sure some math wizards here can elaborate and correct me.
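The "differentials become multiplications" point can be checked symbolically. A minimal sketch with SymPy (assumed available; this is not the authors' code), using f(t) = e^(-2t) as a hypothetical example:

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)
f = sp.exp(-2 * t)

# Laplace transform of f and of its derivative.
F = sp.laplace_transform(f, t, s, noconds=True)
dF = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)

# L{f'(t)} = s*F(s) - f(0): differentiation became plain algebra in s.
assert sp.simplify(dF - (s * F - f.subs(t, 0))) == 0
print(F)  # 1/(s + 2)
```

So an equation involving d/dt turns into an algebraic equation in s, which is why the transform makes differential equations "easier".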
I don't have any real understanding of the Laplace transform, but I understand the Fourier transform well enough that it makes sense to me. Back then, I saw a claim that the Laplace transform is a generalization of the Fourier transform, in the sense that it transforms a function not only to a space of frequencies and phases of sine waves, but to a larger space of parameters of exponentials. Note that the parameter space of the sine waves is a subset of the (complex) parameter space of exponentials.
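That claim can be made concrete. The (one-sided) Laplace transform is

```latex
F(s) = \int_0^\infty f(t)\, e^{-st}\, \mathrm{d}t,
\qquad
e^{-st} = \underbrace{e^{-\sigma t}}_{\text{decay/growth}}
          \cdot \underbrace{e^{-i\omega t}}_{\text{oscillation}},
\qquad s = \sigma + i\omega .
```

Restricting s to the imaginary axis (σ = 0, pure sinusoids) recovers the one-sided Fourier transform, which is exactly the "subset" of the exponential parameter space described above.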
Sounds right to my (very very rusty) recollection. Laplace transforms are a magic trick that let you easily solve some kinds of differential equations.
Something also worth mentioning is that it isn't just useful when dealing with differentiation / anti-differentiation but also when dealing with convolution.
I've had some problems understanding the Laplace transform. Maybe somebody here can point me towards some material.
I have an interest in understanding how IIR filters are designed, and I always get stuck at this part in DSP books. The Laplace transform is used, but as well as finding the mathematics difficult, I don't really understand why it is being used at all. I think it is trying to replicate the effect of an analog circuit?
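If it helps: roughly, yes. In classic IIR design you first describe an analog prototype filter by its transfer function H(s) in the Laplace domain, then map it to a digital H(z) (for instance via the bilinear transform). A sketch with SciPy (the function names are SciPy's, nothing here is from the article; no frequency pre-warping, so the cutoff shifts slightly):

```python
import numpy as np
from scipy import signal

fs = 1000.0  # sample rate, Hz
fc = 100.0   # desired cutoff, Hz

# 1) Analog prototype: 4th-order Butterworth low-pass, as H(s) coefficients.
b_s, a_s = signal.butter(4, 2 * np.pi * fc, btype="low", analog=True)

# 2) Bilinear transform: map H(s) to digital H(z), i.e. IIR coefficients.
b_z, a_z = signal.bilinear(b_s, a_s, fs=fs)

# Sanity check: the digital filter is a low-pass (unity gain at DC,
# strong attenuation near Nyquist).
w, h = signal.freqz(b_z, a_z, worN=1024, fs=fs)
print(abs(h[0]))  # close to 1.0 at DC
```

So the Laplace transform appears because the analog prototype lives in the s-domain; the z-transform then plays the same role for the resulting digital filter.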
rollulus gave a good summary of Laplace transforms and what they do. For some more context, they appear regularly in applied probability (e.g. finance, insurance, physical models including dams). A typical problem is dealing with sums of non-negative random variables. Let's say you want the distribution of the sum of n independent copies of a non-negative random variable with distribution function F. The hard way is the n-fold convolution, essentially evaluating an n-dimensional integral. The easy way is using the Laplace transform of F and simply raising it to the power of n.
The result isn't always invertible analytically, but you can almost always invert it numerically and this is why techniques like the one outlined in the paper are so important.
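A concrete instance of the sum trick (a toy check with NumPy/SciPy, not the paper's code): the Laplace transform of an Exponential(λ) density is λ/(λ+s), so the sum of n i.i.d. copies, an Erlang(n, λ) variable, has transform (λ/(λ+s))^n:

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

lam, n, s = 1.5, 3, 0.7  # rate, number of summands, and a test point s > 0

def erlang_pdf(t):
    # Density of the sum of n Exponential(lam) random variables.
    return lam**n * t**(n - 1) * np.exp(-lam * t) / factorial(n - 1)

# Laplace transform of the Erlang density by direct numerical integration...
lt_numeric, _ = quad(lambda t: np.exp(-s * t) * erlang_pdf(t), 0, np.inf)

# ...equals the one-variable transform raised to the n-th power.
lt_product = (lam / (lam + s)) ** n

print(lt_numeric, lt_product)  # agree to numerical precision
```

Raising a function to a power is trivial; the n-fold convolution it replaces is not.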
This is a fantastic post and I thoroughly recommend reading it and the 2019 paper that summarises all their work for several reasons:
1. Very clear exposition of previous work and their own.
2. Clear evaluation metrics.
3. They've even made it easy for you to replicate their work and results.
I've never understood the use of the Laplace transform. Perhaps that's due to my mathematical exposure (theoretical qualitative analysis of PDEs). Since the Laplace transform lacks the duality of the Fourier transform, it doesn't seem to have a place in research mathematics. But I can think of a dozen fundamental uses of the Fourier transform, from Bourgain spaces to evaluating oscillatory integrals. And if you're working on some manifold with curvature, then you generally need to be familiar with the eigenfunctions of the Laplacian on that manifold... not the basis of the Laplace transform.
I also know a bit of signal processing / numerical analysis, and I'm not familiar with any practical uses of the Laplace transform there. I don't believe it's used in the numerical solution of PDEs or ODEs, whereas spectral methods are a huge area of study and (until recently, I think) were used in the GFS weather model. And most time-series analysis tools either apply the Fourier transform or bail out of this approach and use statistical tools.
My version of Greenspun's 10th rule goes: any sufficiently complex program includes an FFT.
Can anyone help me out here? Is there a problem/theorem the Laplace transform solves/proves which the Fourier transform doesn't?
For a good explanation of the Laplace Transform, please check the video presentation link that I've provided in my other comments.
As for the Laplace Transform, it is mainly used in control system applications where the input/output includes transient/damping/forcing signal waveforms (on and off the unit circle), not only clean steady-state signal waveforms (on the unit circle). This paper provides a good overview of sample usages of the Laplace Transform in Electrical and Electronics Engineering [1].
If by the duality of the Fourier Transform you meant the FFT/IFFT, the Laplace Transform has an equivalent in the form of the Chirp-Z Transform (CZT) and the recently discovered inverse CZT (ICZT); the original HN discussion link for the discovery is also provided in my other comments. For potential useful applications of CZT/ICZT, please check the other/older HN topic comments in [2].
Perhaps we should just wait and watch for the torrent of patent filings on this CZT/ICZT topic if the claim of ICZT is really true and feasible.
The Fourier transform is a line cut out of the Laplace transform, and the Laplace transform is the analytic continuation of the Fourier transform. So you should not be surprised to see the Fourier transform show up in all applications, because there is an FFT but no FLT.
Remember in linear algebra how you spent most of the time learning about eigenvalues and eigenvectors, and particularly how to "diagonalize" a matrix `A` into `A=PDP^-1`? Doing this makes `A` easier to work with, so problems that include `A` are often easier to solve if you replace `A` with `PDP^-1`.
The Laplace transform is the same thing, but instead of matrices, it works on derivatives. Equations involving `d/dt` are often made easier to work with by instead using `s`.
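The matrix half of that analogy in NumPy (a toy example, not from the article): once A = P D P⁻¹, a hard operation on A becomes an easy elementwise operation on the diagonal, just as the Laplace transform turns d/dt into multiplication by s:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])  # eigenvalues 5 and 2, so A is diagonalizable

# Diagonalize: A = P D P^-1 (eigendecomposition).
eigvals, P = np.linalg.eig(A)

# A^10 the hard way: ten matrix multiplications...
hard = np.linalg.matrix_power(A, 10)

# ...and the easy way: only the diagonal entries get raised to the power.
easy = P @ np.diag(eigvals**10) @ np.linalg.inv(P)

print(np.allclose(hard, easy))  # True
```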
This is an interesting way to promote an applied math result. From the promotional material it looks promising, though the unusual approach makes me worry.
The actual paper is at https://www.sciencedirect.com/science/article/pii/S016653161.... This is a pretty obscure journal. The paper is pretty "soft" -- lots of numerical testing of their approach vs. other well-known approaches and not very much theoretical analysis of convergence rates or such.
The main claim seems to be that their approach has better numerical properties for discontinuous functions and that it can be effectively implemented to high order using double precision arithmetic.
Why do you say they are not associated? They seem part of a research group of the Technical University of Budapest, looking at the papers affiliations.
Trying to put together how a new numerical method works scouring for papers with different nomenclatures, different sets of authors, different implementations etc. is often a huge pain. I wish these "landing pages" became a standard, or that a standard repository for them became available. Something like, this is our technique, these are the relevant papers, and here is some demo code.
I'd rather have an applied paper with tests, comparisons and source code than one with lots of theory that is hard to reproduce because the "implementation details" don't appear in the paper.
Thanks to the authors for putting the code out there for anyone to reproduce, and for not falling into the unreproducible "science" that is plaguing us at the moment [1].
Inverting the Laplace transform is a central problem in computational physics, since it connects imaginary-time results (easier to obtain numerically) to real-time response.
Over the years a number of approaches have been developed for the inverse Laplace transform, such as MaxEnt, GIFT and many others.
I would love to see how this new approach fares against those.
I feel like this is what Tim Berners-Lee imagined the World Wide Web to be: sharing knowledge and research with interactive media and hypertext, instead of printed papers. It found new applications outside academia, but this site is probably close to the original idea.
Thank you for the exposure and feedback! We really appreciate it.
About the code. We have added comments and simple running examples to the code on GitHub. Hopefully that helps make the code more accessible to everyone.
About the contribution. Classic numerical inverse Laplace transformation methods work in some cases but fail in others, while the CME method always gives a good approximation at low computational cost. We recommend it for general use when you just want to invert a function numerically without spending effort to figure out what methods might be applicable.
This story is six days old. Don't expect a lot of replies to your comment to come, but know that your comment & more importantly the improvements (and ofc the result!) get appreciated.
Not really my, well, domain (sorry), so my only contribution is that there's a spelling error in the dropdown: it refers to the Heaviside step function as the 'Heavyside' function.
Just spitballing here about an application of the Laplace transform. We have a product that allows users to use machine learning in a semi-automated way, without deeply understanding hyperparameter optimization, model testing, selection, evaluation and such.
There was some talk about supporting prediction on time-series data. I have absolutely no knowledge of how time-series data should be pre-processed or what kinds of algorithms are common or applicable in general (I'm not in charge of the R&D of the data-science-y features). However, it seems like the Laplace transform as a pre-processing step ticks a lot of the checkboxes: as a superset of Fourier, it supports periodic changes in a time series, and being about exponentials, it also allows for growth (or decay) over time, transforming a time series into data that is more amenable to classical ML algorithms.
Is the Laplace transform actually used for such use cases?
I don't know, but the Fourier transform, and specifically the more specialized DCT, certainly is.
Part of the reason is that the algorithms to go from discrete data points to a waveform are fairly well known and fast.
The DCT is the foundation of most lossy encoding formats. Using it for time-series data makes a lot of sense, especially if you are optimizing for storage space.
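For example (a sketch with SciPy's DCT; the keep-the-largest-coefficients scheme here is hypothetical, just to illustrate the storage angle):

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
# A noisy periodic "time series".
series = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.standard_normal(256)

coeffs = dct(series, norm="ortho")

# Keep only the 16 largest-magnitude coefficients (a 16x "compression").
kept = np.zeros_like(coeffs)
top = np.argsort(np.abs(coeffs))[-16:]
kept[top] = coeffs[top]

recon = idct(kept, norm="ortho")
err = np.sqrt(np.mean((series - recon) ** 2))
print(err)  # roughly the noise floor, far below the signal amplitude
```

The periodic structure concentrates into a few coefficients, so most can be discarded, which is the same mechanism JPEG and friends exploit.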
Here's a little ELI5 about the Laplace and inverse Laplace transform, why the inverse transform is fiendishly difficult, and therefore why this result is extraordinarily important.
Imagine you win the Megabucks lottery. The win is one hundred million dollars. You go to claim your money, but you are told you can choose between the full amount given in monthly payments over 20 years, or a lump sum. But the lump sum is not the full $100MM, it is the present value of the monthly payments discounted at a rate of 5%. To discount an amount received 10 years from now at the 5% rate, you simply divide by 1.05 ^ 10, which is very close to exp(0.05 x 10). If you actually calculate this present value using the exponential function, you say that you use "continuously compounded rates".
So, for any stream of future cashflows one can calculate the present value by multiplying the cashflows with appropriate discount factors (of the type exp(-r t)) and adding them up. For different discounting rates r you obtain different present values. This present value as a function of r is the Laplace transform of the cashflow stream as a function of t.
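That identification in code (a toy sketch, not from the paper): the PV of a continuous cashflow stream c(t) at continuously compounded rate r is exactly its Laplace transform evaluated at s = r.

```python
import numpy as np
from scipy.integrate import quad

def cashflow(t):
    """A toy continuous cashflow stream: $1MM/yr for the first 10 years."""
    return 1.0 if t <= 10 else 0.0

def present_value(r):
    """PV at continuously compounded rate r: the Laplace transform at s = r."""
    pv, _ = quad(lambda t: np.exp(-r * t) * cashflow(t), 0, 10)
    return pv

print(present_value(0.0))   # 10.0: the undiscounted total
print(present_value(0.05))  # ~7.87: the Laplace transform at s = 0.05
```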
The inverse Laplace transform is solving the riddle: if I tell you the present value (PV) of some cashflows for any (positive) discount rate you want, can you calculate the cashflows?
Why is this a difficult problem? Because it is "ill-conditioned". Imagine the following two cashflow streams: in the first you get $1MM every year for the next 10 years and another $1MM one hundred years from now. In the second you also get $1MM annually for the first ten years, but the last $1MM comes 101 years from now. For a zero discount rate the value of both cash streams is $11MM. At a 5% rate they are both around $7.7MM and differ by about $330, roughly 0.004%. For any discount rate the PVs will be very, very close.
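A quick numerical check of the two streams (continuous compounding; a toy sketch):

```python
import numpy as np

def pv(times, amounts, r):
    """Present value of discrete cashflows at continuously compounded rate r."""
    times, amounts = np.asarray(times), np.asarray(amounts)
    return float(np.sum(amounts * np.exp(-r * times)))

years = list(range(1, 11))
stream_a = (years + [100], [1e6] * 11)  # last $1MM in year 100
stream_b = (years + [101], [1e6] * 11)  # last $1MM in year 101

for r in (0.0, 0.05, 0.10):
    a, b = pv(*stream_a, r), pv(*stream_b, r)
    print(r, a, b, abs(a - b) / a)  # relative gap shrinks fast as r grows
```

The two transforms agree to a few parts in a hundred thousand at every rate, even though the underlying cashflows differ by a whole year; that is the ill-conditioning in miniature.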
In some cases "in real life" this closeness could be below machine precision. If someone gives you two sets of inputs whose Laplace transforms differ by less than machine precision for all values of the discount rate, then there is no hope of telling them apart knowing only their transforms, at least not without some multiple-precision library.
That should give you an intuition why the inverse Laplace transform is nasty. All hope is not lost though. First of all, in a typical application the Laplace transform of a function is known in closed (analytical) form, so you can actually use multiple precision libraries if you so wish. I have seen cases where people were using precision of 2000 digits in Mathematica for this. It's slow as hell, but it gets the job done.
Separately, you are free to calculate the Laplace transform at any "discount rate", including complex values. If you are smart about how to choose these values, you can come up with good recipes for inverting the Laplace transform.
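Both tricks (extra precision, and evaluating the transform at well-chosen complex points) are available off the shelf. For example, mpmath's `invertlaplace` (a classic Talbot-contour method, not the CME method from the article) works at arbitrary precision:

```python
import mpmath as mp

mp.mp.dps = 50  # work with 50 significant digits

# Known closed-form transform: F(s) = 1/(s + 1), whose inverse is exp(-t).
F = lambda s: 1 / (s + 1)

t = mp.mpf("2.5")
approx = mp.invertlaplace(F, t, method="talbot")
exact = mp.exp(-t)

print(mp.fabs(approx - exact))  # tiny: Talbot converges fast for smooth f
```

The Talbot method samples F at complex points along a deformed contour, which is exactly the "be smart about where you evaluate" recipe.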
For many decades now, the general wisdom was that the various numerical inverse Laplace transform algorithms have strengths and weaknesses, but no single one is universally good.
Maybe this one will be, and if so it will be indeed revolutionary.
One of the big breakthroughs in Machine Learning/Neural Networks (NN) is using the derivative of the error to update the weights of the network (backpropagation). I'm wondering if CME could be used to avoid local minima/maxima in some way, to speed up the training process.
It seems to be advanced maths, but I wonder: since the designer already knows in advance the desired form of the function (in both cases, something more smoothed / continuous / centered), why not draw the desired function graphically and let software automatically find the best approximation?
EDIT: well, it seems to be a general function approximator, so my point doesn't apply (though it may still apply to the new activation functions in machine learning).
jiggawatts | 5 years ago:
You may even want to contact Wolfram Research! They just implemented a new "Asymptotics" module that includes approximate inverse Laplace transforms as a feature. See: https://reference.wolfram.com/language/guide/Asymptotics.htm...
They might add your approach into the 12.2 release, which would mean that many thousands of people could automatically benefit from your hard work!
gspr | 5 years ago:
You're complaining that it's both too minimalist and too enormous?
GolDDranks | 5 years ago:
Is this claim correct?
cm2187 | 5 years ago:
What are the domains where this new method can be applied? Is it mostly physics simulations and the like?
barbecue_sauce | 5 years ago:
Laplace transforms are an entire course?
[1]http://sces.phys.utk.edu/~moreo/mm08/sarina.pdf
[2]https://news.ycombinator.com/item?id=21232296
Longer answer: https://www.quora.com/What-is-the-purpose-of-Laplace-transfo...
qppo | 5 years ago:
BIBO stability of IIR systems
JorgeGT | 5 years ago:
As a researcher I really appreciate the promotion effort. Some years ago I came across a similar "landing page" for a numerical technique that helped me a lot: http://people.ece.umn.edu/users/mihailo/software/dmdsp/
[1]: http://polaris.imag.fr/arnaud.legrand/teaching/2016/mosig_sm...
koheripbal | 5 years ago:
This might really spark some interesting breakthroughs that I am not smart enough to predict.
Core math breakthroughs like this have huge and unpredictable knock-on advancements...
signa11 | 5 years ago:
I found this (https://johnflux.com/2019/02/12/laplace-transform-visualized...) to be pretty cool as well.
nabla9 | 5 years ago:
Fourier transform: sinusoids.
Laplace transform: sinusoids + exponentials.
Here is a nice video explaining it: https://www.youtube.com/watch?v=n2y7n6jw5d0