
tf.exp() for lambda_2 in KdV function #34

Open
cruzchue opened this issue May 6, 2021 · 5 comments

Comments

@cruzchue

cruzchue commented May 6, 2021

This question has been asked before by someone else, but it stayed unanswered. I am wondering about the answer myself, as I was reusing the KdV.py code with a different equation and not recovering the parameters.

The KdV.py example estimates two parameters: lambda_1 and lambda_2. Both are coefficients that go into the differential operator (F) as:

F = -lambda_1 U U_x - lambda_2 U_xxx

That is, lambda_1 multiplies the solution (U) times its first derivative (U_x), and lambda_2 multiplies the third derivative (U_xxx). However, within the functions net_U0 and net_U1, the coefficient lambda_2 has the operation exp applied to it.

I mean, lambda_1 = self.lambda_1, but lambda_2 = tf.exp(self.lambda_2). Perhaps I am missing something, but should it not be lambda_2 = self.lambda_2 instead?

@andrewforde1

Hi Eduardo,

This ensures that lambda_2 is always positive, since an exponential term is always positive. Sorry, I don't have a source, but I remember reading that this parameterization is statistically better behaved than alternatives such as squaring the term.

Hope this helps,
Andrew
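For illustration, here is a minimal sketch of the log-space idea in plain Python (not the repository's TensorFlow code; the name `effective_lambda_2` is hypothetical): the trainable variable can be any real number, and exponentiating it guarantees the effective coefficient is strictly positive.

```python
import math

def effective_lambda_2(raw_lambda_2):
    # The raw (trainable) value lives in log space and may be any real
    # number; exp maps it to a strictly positive coefficient.
    return math.exp(raw_lambda_2)  # exp(x) > 0 for every real x

# Even very negative raw values give a small positive coefficient;
# the sign can never flip and the value can never reach zero.
for raw in (-10.0, -6.0, 0.0, 3.5):
    assert effective_lambda_2(raw) > 0.0
```

This is the same trick as optimizing the log of a variance or a length scale: unconstrained gradient steps on the raw value can never push the physical coefficient out of its valid range.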

@Schrodinger-E

But for the unknown coefficients of the governing equation, we cannot determine the sign a priori.

@andrewforde1

I'm not familiar with the KdV equation, but if you look at the paper "Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations", this might be due to the difference between the Burgers equation and the KdV equation.

Burgers: u_t + lambda_1 * uu_x - lambda_2 * u_xx = 0

KdV: u_t + lambda_1 * uu_x + lambda_2 * u_xxx = 0

They both use the same form for the nonlinear operator:

N[u] = lambda_1 * term1 - lambda_2 * term2

So keeping lambda_2 positive may be because the second term is added rather than subtracted. As I said, I don't know anything about these equations, so I could be completely wrong, but hopefully this helps.
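To make the sign difference concrete, here is a small plain-Python sketch of the two residuals as written above (the helper names are hypothetical; each returns the left-hand side of the PDE, which the training drives toward zero):

```python
def burgers_residual(u_t, u, u_x, u_xx, lam1, lam2):
    # Burgers: u_t + lambda_1 * u * u_x - lambda_2 * u_xx = 0
    # (lambda_2 term subtracted: diffusion)
    return u_t + lam1 * u * u_x - lam2 * u_xx

def kdv_residual(u_t, u, u_x, u_xxx, lam1, lam2):
    # KdV: u_t + lambda_1 * u * u_x + lambda_2 * u_xxx = 0
    # (lambda_2 term added: dispersion)
    return u_t + lam1 * u * u_x + lam2 * u_xxx
```

With the same positive lambda_2, the two equations use it with opposite signs on the highest-derivative term, which is why a positivity constraint interacts differently with the two forms.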

@Schrodinger-E

After testing, I guess the root cause is that exp(-6.0) ≈ 0.0025. For the Burgers equation, lambda_1 = 1.0 and lambda_2 = exp(-5.75) ≈ 0.00318; for the KdV equation, lambda_1 = 1.0 and lambda_2 = exp(-6.0) ≈ 0.0025. Good initial values help training.
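The numbers above can be checked directly in plain Python (values rounded to five decimal places):

```python
import math

# Log-space values reported above for the two equations.
burgers_lambda_2 = math.exp(-5.75)
kdv_lambda_2 = math.exp(-6.0)

print(round(burgers_lambda_2, 5))  # 0.00318
print(round(kdv_lambda_2, 5))      # 0.00248
```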

@cruzchue
Author

cruzchue commented Nov 4, 2021

Thanks for your answers. The code has worked well for us, I mean, for cases of the form (lambda_1 * term1) + (lambda_2 * term2), which covers a lot of PDEs.
