tf.exp() for lambda_2 in KdV function #34
Hi Eduardo, this ensures that lambda_2 is always a positive value, since an exponential term is always positive. Sorry, I don't have a source, but I remember reading that this method is statistically better than other approaches such as squaring the term. Hope this helps.
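For illustration, here is a minimal sketch (not code from this repository) of the two ways of keeping a trainable coefficient positive that are being compared:

```python
import tensorflow as tf

raw = tf.Variable(-6.0, dtype=tf.float32)
lambda_exp = tf.exp(raw)       # exponential reparameterization: strictly positive for any raw value

raw2 = tf.Variable(0.05, dtype=tf.float32)
lambda_sq = tf.square(raw2)    # squaring is also non-negative, but it can reach exactly 0
                               # and its gradient vanishes at raw2 = 0
```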
But for the unknown coefficients of a governing equation, we cannot determine the sign a priori.
I'm not familiar with the KdV equation, but if you look at the paper "Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations", both the Burgers and KdV examples use the same form for the nonlinear operator:
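(If I recall the paper correctly, those forms are u_t + lambda_1 u u_x - lambda_2 u_xx = 0 for Burgers and u_t + lambda_1 u u_x + lambda_2 u_xxx = 0 for KdV, i.e. in both cases lambda_2 scales a single linear derivative term.)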
After testing, I guess the root cause is that exp(-6.0) ≈ 0.0025. For the Burgers equation, lambda_1 = 1.0 and lambda_2 = exp(-5.75) ≈ 0.00318; for the KdV equation, lambda_1 = 1.0 and lambda_2 = exp(-6.0) ≈ 0.0025. Good initial values help the training.
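A quick sanity check of those numbers (plain NumPy, just the arithmetic; the true coefficients are the ones reported in the paper):

```python
import numpy as np

# Initial coefficient values implied by the log-space parameterization discussed above
print(np.exp(-5.75))   # ~0.00318, close to Burgers' true lambda_2 = 0.01/pi ~ 0.00318
print(np.exp(-6.0))    # ~0.00248, close to KdV's true lambda_2 = 0.0025
```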
Thanks for your answers. The code has worked pretty well for us, at least for cases of the form (lambda_1 * term1) + (lambda_2 * term2), which covers a lot of PDEs.
This question has been asked before by someone else, but it stayed unanswered. I am wondering about the answer myself, as I was reusing the KdV.py code with a different equation and not recovering the parameters.
The KdV.py example estimates two parameters: lambda_1 and lambda_2. Both are coefficients that go into the differential operator (F) as:
F = -lambda_1 U U_x - lambda_2 U_xxx
that is, lambda_1 multiplies the solution (U) and its first derivative (U_x), and lambda_2 multiplies the third derivative (U_xxx). However, within the functions net_U0 and net_U1, the coefficient lambda_2 has exp applied to it.
I mean, lambda_1 = self.lambda_1, but lambda_2 = tf.exp(self.lambda_2). Perhaps I am missing something. Should it not be lambda_2 = self.lambda_2 instead?
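For concreteness, here is a stripped-down, illustrative sketch of the pattern I am asking about (not the actual net_U0/net_U1 code, which also involves the Runge-Kutta time stepping; the initial values are only my assumption based on this thread):

```python
import tensorflow as tf

class KdVSketch:
    def __init__(self):
        # lambda_1 is trained directly; lambda_2 is stored and trained as a log value
        self.lambda_1 = tf.Variable(0.0, dtype=tf.float32)
        self.lambda_2 = tf.Variable(-6.0, dtype=tf.float32)

    def operator(self, U, U_x, U_xxx):
        lambda_1 = self.lambda_1           # used as-is, so it can take either sign
        lambda_2 = tf.exp(self.lambda_2)   # exp() makes the coefficient entering F strictly positive
        return -lambda_1 * U * U_x - lambda_2 * U_xxx
```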