Neural Network Used For Climate Change Calculations!

I've built a neural network and trained it to extract climate forcing data. This includes the effects from the increase of the greenhouse gas CO2, the effects from various ocean indexes, and the effects from a large number of solar and cosmic parameters.

These results are still preliminary.

Here are the results when I matched the network against the derivative (the dynamic, short-term part) of the temperature signal for non-terrestrial factors. Each factor is expressed as a percentage of its forcing compared to the combined forcing from ENSO and volcanic aerosols.

ENSO and volcanic aerosols 100%
Variation in Earth's rotation 68.5%
Solar wind speed 49.5%
Solar wind temperature 26.3%
Solar wind density 9.7%
Kp Magnetic Index 27.4%
Ap Geomagnetic index 13.3%
Sunspot number 4.8%
F10.7 radio flux 4.0%
Interplanetary magnetic field IMF -2.6%
Neutron counter -> cosmic radiation -1.4%
TSI Total solar Irradiance -1.6%

This result clearly shows that the Sun has a much larger effect on the Earth's weather than what is included in the climate models used by today's climatologists. This result is also statistically significant.

I have to stress that I'm talking here about weather-related pulses entering the Earth's climate system, as this result is a match against the derivative (short-term variations) of the global temperature signal. I will continue spending my time on the overall impact on the climate from the tide.

The temperature data used in this case are taken from satellites. I use the UAH MSU satellite data, as these data seem to me to be the most objective.

This result further shows that large impacts on the Earth's temperature come from changes in the Sun's activity other than TSI, plus from changes related to the Earth's rotation. The claim that the Sun's impact on the Earth's weather comes primarily from changes in TSI (Total Solar Irradiance) is clearly false, as my result shows that the impact from changes in TSI is irrelevant. I should point out here that the input signal I use is not the real TSI impact on the Earth. Because the variation of the TSI signal is so small, the yearly seasonal change in the distance from the Sun makes the real TSI signal essentially sinusoidal. I have ignored this sinusoidal component in these calculations.

One thing further to note is that no impact from cosmic rays, as measured by the Oulu neutron counter, has been detected.

The implication is that the cosmic ray theory suggested by Henrik Svensmark is weakened, and that the mechanism suggested by, for example, Piers Corbyn at www.weatheraction.com gains credibility from this investigation.

The mechanisms by which the Earth's weather and climate can be affected by this are discussed in the latter part of this video, from a talk held at EIKE by Prof. Dr. Vincent Courtillot.



Talk by Prof. Dr. Vincent Courtillot

From this result, I have tried to estimate the anthropogenic effect during the last decades from the temperature residue. My results indicate an anthropogenic component of somewhere between 0.05 and 0.1 °C/decade. At that rate, the anthropogenic forcing by 2100 should be below 1 degree Celsius (roughly 0.1 °C/decade over the remaining nine decades gives about 0.9 °C). Of course, some of this warming can also be caused by various ocean temperature cycles that are not connected with CO2.

In other words, the claims of dangerous human-caused global warming have been significantly exaggerated.

So what is a Neural Network?

Neural Networks, often also called Artificial Neural Networks (ANN), are a method in Artificial Intelligence. They are used to identify patterns by using a self-learning algorithm. This algorithm tries to simulate how brain cells are believed to work and how the brain identifies patterns. In other words, Neural Networks use algorithms that try to mimic nature.

Neural Networks are today an established method in use in many different types of applications. Among them you can find Optical Character Recognition, mineral and oil prospecting, ECG heart diagnosis, forecasts of stocks and commodities, and robotic applications.

You can also find many other examples in various engineering applications and in academic research.



So how does it work?

The basic building block in a neural network is called a neuron.



This element is a typical neuron, the basic building block of a neural network!

A number of input signals are each multiplied by an individual weight, and the products are then summed together into the value G.





 
This is the sigmoid transfer function.

The sum G is converted through a transfer function, either some sort of step function or, more often, an asymptotic transfer function. Usually this is the sigmoid function.

The sigmoid function, Out = 1/(1 + e^(-G)), has an output value that lies between 0 and 1.

Alternatively, one often uses a balanced version of the sigmoid function, Out = 2/(1 + e^(-G)) - 1, where the output value is between -1 and 1.

For the best results, the input values should be rescaled so that their maximum and minimum values fall within a range of -0.8 to 0.8. The weight values are initially set randomly to a value between -0.5 and 0.5.
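As an illustration, here is a minimal sketch of such a neuron in Python (my own program is written in VB6; the function and variable names here are just for illustration):

```python
import math
import random

def sigmoid(g):
    # Standard sigmoid transfer function: output between 0 and 1.
    return 1.0 / (1.0 + math.exp(-g))

def balanced_sigmoid(g):
    # Balanced version: output between -1 and 1.
    return 2.0 / (1.0 + math.exp(-g)) - 1.0

def neuron(inputs, weights):
    # Multiply every input signal with its individual weight,
    # sum the products into G, and pass G through the transfer function.
    g = sum(x * w for x, w in zip(inputs, weights))
    return balanced_sigmoid(g)

# Input values rescaled to lie within -0.8 to 0.8;
# weights initially set randomly between -0.5 and 0.5.
inputs = [0.3, -0.7, 0.1, 0.8, -0.2]
weights = [random.uniform(-0.5, 0.5) for _ in inputs]
print(neuron(inputs, weights))
```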

To get a neural network that gives meaningful results, you must first train it with input values against a known output value. This is the first phase, the training phase. The input signals used are often time series, or they can be measurements from sensors.

In my case, I use 120 neurons as input, each of which takes 5 input values. The input values are randomly selected from input data taken from months earlier than the output data. The output data represent the global temperature for a particular month. The months for the input data can be anywhere from 3 years before up to the month before the output month, which gives a selection of 36 months, i.e. 3 years. Each input signal is split into 3 different signals representing a PID modifier (PID = Proportional, Integral and Derivative). The only exception is the CO2 value, as this is a monotonically increasing signal over time. All in all, I'm able to use up to 51 different source data inputs when the PID modifier signals are included. So, for example, when I talk here about source input data, 3 of them come from the solar wind speed: one proportional value, one integral value and one derivative value. Each input value is then rescaled to span only -0.8 to 0.8 so that it is adapted for use in a sigmoid function. As input values to the neurons, the program then randomly picks source input signals from a matrix of 36 (previous months) times up to 51 (source signals).
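A sketch of how such an input matrix could be built and sampled, assuming each source series is a plain list of monthly values (the helper names are mine, not from my VB6 program):

```python
import random

def pid_expand(series):
    # Split one monthly source signal into its PID modifier signals:
    # Proportional (the value itself), Integral (running sum) and
    # Derivative (month-to-month difference).
    proportional = list(series)
    integral, running = [], 0.0
    for value in series:
        running += value
        integral.append(running)
    derivative = [0.0] + [b - a for a, b in zip(series, series[1:])]
    return proportional, integral, derivative

def rescale(series, lo=-0.8, hi=0.8):
    # Rescale a signal to span only -0.8 to 0.8, adapted for the sigmoid.
    mn, mx = min(series), max(series)
    return [lo + (v - mn) * (hi - lo) / (mx - mn) for v in series]

def pick_neuron_inputs(matrix, n_neurons=120, n_inputs=5, n_lags=36):
    # matrix[signal][lag]: up to 51 source signals for each of the
    # 36 months preceding the output month. Each of the 120 neurons
    # gets 5 randomly picked (signal, lag) values.
    return [[matrix[random.randrange(len(matrix))][random.randrange(n_lags)]
             for _ in range(n_inputs)]
            for _ in range(n_neurons)]

# Example: two hypothetical source signals over 36 months.
raw = [[0.1 * m for m in range(36)], [(-1) ** m for m in range(36)]]
matrix = [rescale(pid_expand(s)[0]) for s in raw]
print(pick_neuron_inputs(matrix, n_neurons=2))
```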

The training session then calculates the square root of the average squared error (which is the same as the standard deviation) between the calculated and the real global temperature for each month, beginning in December 1978 and ending at the end of 2004. By changing the weights, this error is lowered over time. While the algorithm lowers the error and seeks a more optimal transfer function for the neural network, the program also calculates the error for the period from 2005 up to 2011. This latter part is called the test section. The modification of the weights is based only on the error of the training section, the period from December 1978 to the end of 2004. The test period does not affect the transfer function of the Neural Network. Therefore, the test period is used to verify that the transfer function of the Neural Network is still converging based on real correlations. Because of noise, the training tends to create what is called over-fitting, or over-optimization, as the algorithm progresses and tries to minimize the error. When that happens, the error of the test part starts to increase instead of decreasing; the optimization is then no longer based on real correlations but on noise, and the process should stop. During this process, I save the weights that cause the smallest error in the test period, ensuring that I can do further analysis with the most optimal result.

Note: Whenever I refer to the term Error on this page, I'm talking about the standard deviation between the calculated value and the value measured from satellites.
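In code, this Error measure could look like the following (a sketch, not the actual VB6 routine):

```python
import math

def error(calculated, measured):
    # The Error used on this page: the square root of the mean squared
    # difference between the network output and the satellite temperature.
    n = len(measured)
    return math.sqrt(sum((c - m) ** 2 for c, m in zip(calculated, measured)) / n)
```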



This is a plot of the size of the errors, split into the target/training error and the test error. The X-axis is the number of loops. As can be seen, the training error keeps getting lower. In contrast, the test error reaches a minimum of just under 0.04 after about 12,000 loops.



This graph shows the derivative value. The pink line is the real value as measured from satellite. The blue line is the value calculated by the Neural Network. As input signals into the Neural Network, in this case, I use the ENSO index, volcanic aerosols, various solar indexes, variations of the Earth's rotation and cosmic radiation. The error is greater after 2004, which is the test session, although this can be difficult to spot.



This graph displays the real satellite temperature in pink. The red line is created from the previous derivative signal. It is a composite of the derivative signal, the CO2 increase, the PDO signal and the AMO signal. A damping exponential component is also applied.

The results one gets out of the training part and the testing part of a neural network can be compared with the structure of a hologram. A hologram that is broken into two parts still shows the same picture. This is because the object in the hologram is stored as phase information from a single light source, and it can therefore still be seen in a broken-up hologram photo. In a neural network, likewise, the underlying transfer functions based on the different weights are the same for both parts when the result is optimal. This is why neural networks work, and why they are so effective when used on observations that are based on physical relationships and mechanisms.

The most common algorithm used in Neural Networks is called Backpropagation. In Backpropagation, each calculation is divided into two passes. One is called the forward pass, where the error is calculated for one set of data; in my case, that would be one month. The second is called the backward pass, where all the weights are updated based on the derivative of the calculated error for that set of data.
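For reference, here is a minimal sketch of one Backpropagation step for a single sigmoid neuron, assuming a squared-error loss (this is textbook Backpropagation, not code from my program):

```python
import math

def sigmoid(g):
    return 1.0 / (1.0 + math.exp(-g))

def backprop_step(inputs, weights, target, rate=0.1):
    # Forward pass: calculate the output and the error for one
    # set of data (one month in my case).
    g = sum(x * w for x, w in zip(inputs, weights))
    out = sigmoid(g)
    # Backward pass: update all weights based on the derivative of the
    # squared error with respect to each weight.
    delta = (out - target) * out * (1.0 - out)   # dError/dG for a sigmoid
    return [w - rate * delta * x for w, x in zip(weights, inputs)]
```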

In my case, Backpropagation does not work so well. This is because the input signals come from different sources with different characteristics and noise levels, which can cause the algorithm to get stuck in local minima.

Instead, what I do is change individual weights randomly, one at a time. The amount of change is randomly picked within a fixed range, either positive or negative. The average variance value is then calculated for all sets of months in the training session. If this error is smaller than the previous error, the weight change is kept; otherwise it is discarded. This method works quite well for me. The program is written in Visual Basic 6. VB6 programs can be compiled, which makes them quite fast. I use a PC that is 6 years old. Despite this, a run takes no more than 15-20 minutes. I'm sure that on a more modern PC it would take less than 5 minutes.
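The core of this random search is simple. A sketch of one training step, in Python rather than VB6 and with names of my own choosing:

```python
import random

def train_step(weights, training_error, step_range=0.05):
    # Pick one weight at random and change it by a random amount,
    # positive or negative, within a fixed range.
    current = training_error(weights)
    trial = list(weights)
    trial[random.randrange(len(trial))] += random.uniform(-step_range, step_range)
    # Keep the changed weight only if the error over all training
    # months got smaller; otherwise discard the change.
    return trial if training_error(trial) < current else weights
```

This step is then repeated for thousands of loops while the test error is monitored, and the weights giving the smallest test error are saved.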

You can find much more information on Neural Networks, and how they work on other websites on the Internet.

Neural Networks have the unique capability to identify and extract correlations in noisy non-linear systems in a way that standard statistical and frequency methods are unable to do. Therefore, I wanted to see if it is possible to identify correlations to climate signals using a Neural Network. I also wanted to see how good the results from a Neural Network could be for such a complicated system as the Earth's climate.

And, yes, I detected correlations, strong correlations, to the global temperature. One can argue whether the global temperature is a well-defined value, since it is not a quantity in the way heat is. Still, it gives a reasonably good picture of what is going on, both with the temperature and the climate. A better measure to use would be the Sea Surface Temperature (SST), which I am going to investigate in the future.

Normally, Neural Networks are used for forecasts. Here, I do not make forecasts. Instead, I look for correlations. While doing this, I realized that by calculating the minimal error for each of the different input signals in isolation, I could estimate the forcing factors from these signals. I used the results from ENSO and volcanic aerosols as the reference for comparison. The result is presented here as the percentage of forcing compared to the forcing from ENSO and volcanic aerosols.

By studying one signal at a time, I can calculate this forcing. In other words, I can quantify each signal's impact relative to the others. Note, however, that some of the forcings can have the same source. For example, the 3 different solar wind data series are caused by the same source, so there may be some overlap.

The error in the test part compared to the mean value is 0.045669. This is the error one gets if there is no forcing at all. Under the Error column below, I list the error I obtained for each individual signal when all other signals were excluded. The error here is, as before, the square root of the average squared error, i.e. the standard deviation.

I assume that the fraction error y falls off exponentially with the forcing x:

y = e^(-kx), which is the same as x = -ln(y)/k.

Let us write ENSOva for (ENSO + volcanic aerosols). The forcing as a percentage of the ENSOva forcing is then

F = 100 * (x_signal - x_0) / (x_ENSOva - x_0)

where x_0 = -ln(0.045669/0.045669)/k = 0 is the no-forcing point. Since the unknown constant k cancels in the ratio, this reduces to

F(Error) = 100 * ln(Error/0.045669) / ln(Error_ENSOva/0.045669)

with Error_ENSOva = 0.037305.
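Put into code, the conversion from Error to forcing percentage looks like this (the constant names are mine):

```python
import math

ERROR_NO_FORCING = 0.045669   # test error against the mean value (no forcing)
ERROR_ENSOVA = 0.037305       # error for ENSO + volcanic aerosols

def forcing_percent(error):
    # F(Error) = 100 * ln(Error/0.045669) / ln(Error_ENSOva/0.045669);
    # the unknown constant k has cancelled out.
    return 100.0 * (math.log(error / ERROR_NO_FORCING)
                    / math.log(ERROR_ENSOVA / ERROR_NO_FORCING))

print(round(forcing_percent(0.041317), 1))   # solar wind speed -> 49.5
```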


Signal (all other signals excluded)   Error      Fraction   Forcing %
Neutron counter                       0.045801   1.003        -1.4
Interplanetary magnetic field IMF     0.045910   1.005        -2.6
Solar wind temperature                0.043300   0.948        26.3
Solar wind density                    0.044778   0.980         9.7
Solar wind speed                      0.041317   0.905        49.5
Kp Magnetic Index                     0.043206   0.946        27.4
Sunspot number                        0.045232   0.990         4.8
Ap Geomagnetic index                  0.044461   0.974        13.3
F10.7 radio flux                      0.045305   0.992         4.0
Variations in Earth's rotation        0.039762   0.871        68.5
TSI                                   0.045815   1.003        -1.6
PDO                                   0.046751   1.024       -11.6
AMO                                   0.043650   0.956        22.4
SOI                                   0.041625   0.911        45.8
SST                                   0.036543   0.800       110.2
ENSO                                  0.037305   0.817       100.0



Solar factors other than TSI are very significant, at least for short-term temperature variations, and cannot be ignored. They must be included in climate models if those models are to gain credibility. In fact, these factors are nearly as important as the ENSO index, at least for short-term temperature variations.

The 5 dominant short-term non-sea factors, based on these results, are: variations in the Earth's rotation, the solar wind speed, the solar wind temperature, the Kp Magnetic Index and the Ap Geomagnetic index. However, there is no correlation with galactic cosmic radiation, as there is no correlation to the Oulu neutron counter.

Note! I discovered that the random seed I used was always the same. After fixing this with the VB6 command Randomize, which creates a unique seed for the random function, I re-calculated the errors, this time by calculating individual signals one at a time. For each individual signal, I included the proportional, derivative and integral values. I also added a function to the program that calculates this value 10 times. By doing this, I am now able to get better statistical values with a smaller spread.

Here is an example of the calculated error for the solar wind speed.


0.041262
0.041359
0.041391
0.041389
0.041201
0.041367
0.041351
0.041356
0.041283
0.041217

As you can see, the statistical variation of the error is small; thus the result is statistically significant.
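The spread can be checked directly (a small Python check of the ten runs above):

```python
runs = [0.041262, 0.041359, 0.041391, 0.041389, 0.041201,
        0.041367, 0.041351, 0.041356, 0.041283, 0.041217]

mean = sum(runs) / len(runs)
spread = (sum((r - mean) ** 2 for r in runs) / len(runs)) ** 0.5
print(mean, spread)   # the spread is tiny compared to the mean error
```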

Other things to note: when the forcing is negative, that indicates that no correlation has been detected and noise has taken over. A curious thing is that the PDO (Pacific Decadal Oscillation) shows no correlation with the global satellite temperature record, while the AMO (Atlantic Multidecadal Oscillation) does. There is a difference in how these indexes are defined. The AMO is defined from the sea surface temperature variability in the North Atlantic. The calculation of the PDO is more complex: as I understand it, the PDO is based on sea temperature anomalies in parts of the northern Pacific after the global temperature trend has been removed. That removal explains why the correlation with the global temperature value is absent.

I am going to follow the practices of climatologists and keep the software and the data I use confidential. For now! The reason is that I have not documented the software. In fact, when I change parameters, I make changes in the software and recompile it. I plan to make this software more general-purpose and remake it so that, apart from examining the relation to the global temperature record, it can also be used to look for correlations with the Sea Surface Temperature (SST), the ENSO signal and the SOI. Note: I have now examined the ENSO signal. I discovered that the ENSO signal is driven in large part by the tide.

When I have documented the software and made it more general-purpose and transparent, I want to make it free and downloadable so that anyone can use it and see for themselves that the result I get is real and that the Sun plays a much more prominent role than the climate community would ever admit. I think this is very important, because the projections and forecasts made by these people are dishonest, or the results are based on ignorance. I would add that I'm not paid by anyone; I do this on a voluntary basis.

As far as I know, no other studies using Neural Networks have been made to evaluate climate forcing factors, which is quite surprising, because a Neural Network is an effective method to extract correlations from a chaotic and non-linear system such as the climate system. Besides, it is not that complicated to do.

If I were a climatologist with the agenda of finding the truth, and not working for the global warming cause, there are two fields I would like to investigate.

1. I would look at the ionization and electric properties of the upper atmosphere and their possible influence on variations in the tops of clouds.

2. I would look at changes in pressure at different latitudes and correlate them against the angle between the equator and the changing plane of the Moon's orbit.

Let me explain. The interaction with the solar wind, the amount of UV radiation and the magnetic activity of the upper atmosphere lead to variations in the amount of ionization. This results in variations in the electric charge in that part of the atmosphere. It is reasonable to assume that the variation in the electric charge also affects the clouds. This has been studied by professor Brian A. Tinsley at the University of Texas, Dallas: Electric and solar forcing on the climate.

It has recently been demonstrated that there are large variations over time in the average heights of clouds, and that cloud heights globally have been declining during recent years. Declining heights of clouds! A lowering of the cloud tops cools the Earth.



Video explaining that the heights of clouds have been declining. Plus a little bit of AGW propaganda.

Because the plane of the Moon's orbit varies relative to the Earth's equator, following the precession of the lunar nodes with a periodicity of 18.6 years, the tidal effect of the Moon on the atmosphere also varies. This variation has not only a longitudinal effect but, more importantly, also a latitudinal effect. When the angle between the plane of the Moon's orbit and the Earth's equator is large, air and water move from the equator closer to the poles. This should affect the atmosphere and cause disturbances in the jet stream.



 

Please come back, as I will update this page over time when I get more results!

 
Need any help with Artificial Neural Network applications?

Artificial Neural Networks are used in many applications but applying ANN to examine correlations and for forecast in climate science is a special case. The reason for this is that while the output signal may be noisy, the internal processes are based on strong physical mechanism based on thermodynamics and fluid dynamics. Because the output is composed of several different mechanisms, and the output is a response with a multitude of time lags, the output becomes noisy. The unique feature of ANN is its ability to resolve these internal interrelationships. As an independent consultant, I'm interested in assisting in doing work in this area. However it does not need to be in the area of climate science.