Thursday 26 February 2015

Scale and theory

Introduction

As our knowledge of the world has advanced, it has been established that objects and physical systems are made of smaller pieces that drive their behaviour. When a theory and its calculations do not take into consideration the smallest parts that compose a system, they leave out a considerable amount of information, which implies a loss in prediction accuracy.

If "the last elements" are defined as the elements that contains the whole information of one physics system, then for one physics system that contains a certain amount of information, the lasts elements have to exist.

In the physical world there exist magnitudes such as velocity, charge, mass and so on. The amount of information obtained when we tag all the smallest particles of a system with their magnitudes is greater than when we tag groups of particles with their aggregate magnitudes. That is why the "last elements" of a system are also the smallest ones.

Uncertainty

In this text we define uncertainty as the degree of deviation that a calculation within a theory has with respect to the real situation it intends to simulate. For example, a broker who uses charting methods in his investments has a high uncertainty, because the predictive capacity that charting gives for real economics is null.

So we can proceed as follows:

  1. First, we calculate the results T that the theory gives for all n possible physical circumstances.
  2. Second, we access a register that contains the real results R for all possible physical circumstances.
  3. Third, using T and R we calculate an indicator that gives the level of uncertainty of the theory; for example, one such indicator is sketched below.
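
A minimal sketch of such an indicator, assuming we simply average the relative deviation between theory and reality over the n circumstances (the same magnitude (T-R)/R used later for the scale of randomness), could be:

$$U = \frac{1}{n}\sum_{i=1}^{n}\frac{\lvert T_i - R_i\rvert}{\lvert R_i\rvert}$$

A theory with U = 0 reproduces every real result exactly; larger values of U mean larger uncertainty.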



Strict causality

We understand "strict causality" to apply when the theory has 0 uncertainty. A strictly causal theory has to include the "last elements"; otherwise there would exist information outside the model, not being considered, that affects the behaviour of the system. Today these last elements are considered to be the particles of the standard model, but the uncertainty of this model (whether Heisenberg uncertainty is fundamental or not) is not 0.

Partial causality

We define "partial causality" when the theory has some uncertainty. In this group we have theories or models with low, medium or high uncertainty: theories that give invaluable knowledge with a good approximation to reality, theories with some predictive capacity, or theories as useful as a call to an astrologer.

Scale of randomness


First we are going to consider the whole set of experiments that a theory can emulate, which have real results R. For every theory whose uncertainty is bigger than 0, and which intends to calculate R exactly, there exists a digit at which the calculation has the same uncertainty as choosing that digit by flipping a coin. There are digits in a calculation that we can ignore because they give us no additional information. This is the scale of randomness of a model. This scale will be bigger or smaller depending on how uncertain the model is.


If we make a graph that shows the magnitude (T-R)/R for every n, the scale of randomness is given by the two margins [-u, +u] that contain exactly half of the points. This scale can be understood as the volume of phase space that a model is capable of delimiting with a 50% probability of the results falling inside it. This is the scale at which a model stops giving us more information.
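
A minimal sketch of this calculation, assuming the predictions T and the real results R are available as arrays (the names and numbers here are hypothetical), could look like this:

```python
import numpy as np

def scale_of_randomness(T, R):
    """Return the margin u such that the band [-u, +u] contains
    exactly half of the relative deviations (T - R) / R."""
    deviations = np.abs((np.asarray(T) - np.asarray(R)) / np.asarray(R))
    # The median of |deviation| is the smallest u containing 50% of the points.
    return np.median(deviations)

# Hypothetical example: a model with roughly 5% relative error on 1000 experiments
rng = np.random.default_rng(0)
R = rng.uniform(1.0, 10.0, size=1000)              # "real" results
T = R * (1.0 + rng.normal(0.0, 0.05, size=1000))   # model predictions
print(scale_of_randomness(T, R))                   # roughly 0.03-0.04
```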

Technical limits

At any given moment the capacity for obtaining information from a system that we intend to emulate is limited, and the computational capacity to work on this information inside a model is limited too.

Theories by scale

The theory that emulates the behaviour of the last elements cannot do so when the number of particles becomes too large, because of the technical limits. There is a moment at which we need to propose a new model that contains new definitions, new simplifications and new rules to emulate a bigger system. We have to eliminate information per item in order to increase the number of items that we can consider in our computational model. Cutting this information brings an increase in the uncertainty of the new model, necessarily about the state of the last elements but also about the state of the new, combined elements.


As we see in the previous figure, theory A is the one that works with the last elements and contains the whole information of the system, but this theory is limited to a small number of particles. At that point we can use a theory B that includes some simplifications and is practical for a larger number of elements, because we have lowered the density of information per particle. We can imagine another theory C that follows B, and so on.

Changing to a bigger-scale theory

Imagine that the information a theory works on has the form of a vector (a, b, c, ..., n) for each of the individual elements. We can arrange these vectors into a matrix, where each row is one vector. A theory performs a transformation on the matrix that gives another matrix, corresponding to the future, the past, and so on.


The matrix that contains all the information about the system is the last matrix. To create a new arrangement of information for a bigger-scale model, we combine the rows and columns to obtain the new matrix that the bigger-scale theory works on. For example, a new column called Temperature is a statistical combination of the velocities of the particles of the system. In this way, we get a much smaller matrix to work on.
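
A minimal sketch of this coarse-graining, assuming the last matrix holds one row per particle with position and velocity columns (all names and numbers here are hypothetical), could be:

```python
import numpy as np

# Hypothetical "last matrix": one row per particle, columns (x, y, z, vx, vy, vz)
rng = np.random.default_rng(1)
last_matrix = rng.normal(size=(100_000, 6))

# Combine rows: group the particles into 100 cells of 1000 particles each.
cells = last_matrix.reshape(100, 1000, 6)

# Combine columns: replace the three velocity columns of each cell with a
# single "Temperature" column, proportional to the mean squared speed.
speeds_sq = (cells[:, :, 3:] ** 2).sum(axis=2)   # |v|^2 per particle
temperature = speeds_sq.mean(axis=1)             # one value per cell

# New, much smaller matrix: one row per cell (mean position + temperature).
bigger_scale_matrix = np.column_stack([cells[:, :, :3].mean(axis=1), temperature])
print(last_matrix.shape, "->", bigger_scale_matrix.shape)  # (100000, 6) -> (100, 4)
```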

The result of continued iteration in this way is a handy matrix, with new parameters, that allows us to apply the knowledge to more complex and demanding subjects.

Every theory has its own matrix

We might be tempted to say that when we reach more and more generalist models we need to abandon the combination of rows and columns to create elements like "mountains, animals, wages", but today voice-recognition software works very well and is based on combining basic physical information such as pressure.

A sufficient argument for this is the following. Every physical system capable of being understood has its own last matrix, and every imaginable concept only appears when the last matrix holds certain determinate data. We can make a collection of last matrices that contain a given concept, and compose an equation that tells us whether a broad element like "dog" is present. In this way we can build a correspondence between last matrices and things of the macroscopic world.
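
A minimal sketch of what such a correspondence could look like in code, assuming we already have a collection of last matrices known to contain the concept (everything here, from the chosen statistics to the threshold, is a hypothetical illustration):

```python
import numpy as np

def concept_present(last_matrix, reference_matrices, threshold):
    """Return True if `last_matrix` is close enough to any of the reference
    matrices known to contain the concept (e.g. 'dog')."""
    # Crude distance between coarse statistics of the matrices.
    stats = lambda m: np.array([m.mean(), m.std()])
    distances = [np.linalg.norm(stats(last_matrix) - stats(ref))
                 for ref in reference_matrices]
    return min(distances) < threshold
```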


Saturday 31 January 2015

A fundamental definition of knowledge: what is knowledge?

We can say that knowledge is every bit that we can store in a computer, but this is not as fundamental as the definition that follows.

Knowledge is the information that permits winning more than random play in any game.

The concept of a game is wide and is involved in every matter that contains decisions. Every game has different states, or different distinguishable situations. The game runs from state to state until it reaches a particular type of state, described as the final state or result, which ends the game. This final state gives the result of the game: win, lose, or some score.

Every game permits the players to take decisions about what the state of the game will be after the decision is taken. The capacity of the player to change the future state of the game can be large or small, but never zero.

If the players use randomness to take their decisions, they will win the equivalent of absolute ignorance; in this case they are not using any knowledge. If the players use some kind of information or criterion that leads them to win more than randomly taken decisions, then they are using knowledge, and that criterion or information is knowledge. But if the players win less using some information than using a random criterion, then that information is negative knowledge. As we already know, it is better to be ignorant than to be wrong.
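
A minimal sketch of this comparison, using a hypothetical guessing game with a biased coin (all numbers and names are illustrative), could be:

```python
import random

def play(strategy, rounds=100_000, bias=0.7):
    """Play a guessing game: the player wins a round by predicting
    the outcome of a coin that lands heads with probability `bias`."""
    rng = random.Random(42)
    wins = 0
    for _ in range(rounds):
        outcome = "heads" if rng.random() < bias else "tails"
        wins += (strategy() == outcome)
    return wins / rounds

random_strategy = lambda: random.choice(["heads", "tails"])  # absolute ignorance
informed_strategy = lambda: "heads"                          # knows the coin is biased

print(play(random_strategy))    # ~0.50, the equivalent of absolute ignorance
print(play(informed_strategy))  # ~0.70, so "the coin is biased" is knowledge
```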

There are certain kinds of games that deal with science. In these games the winner is the player who best predicts the number that appears on an instrument.

Having explained the basics, we can present the following properties of knowledge.
  1. Two different criteria for the same game, both of which give more winnings than random play, can lead to different choices.
  2. One criterion can be unbeatable by any other.
  3. A criterion or piece of knowledge does not have to be presented in a formal way; even an abstract superstition could be valid.
  4. The random strategy could obtain the maximum winnings in certain games.

Now we can say that the fundamental point is not that knowledge has to be falsifiable; the fundamental point is that the result of applying the knowledge in the game has to be compared to the result of random play. When a criterion or theory is presented, it has to have the property of being comparable to the result of random play, or to the results of another theory if a previous theory exists.

Some other relations

If we look at Shannon's information theory, we can understand that a game has a bandwidth. This bandwidth is related to the number of decisions that a player can take in each state of the game. A player who is more interested in sending a message than in winning could use this bandwidth to send it. This may remind us of the story about a great chess master whose ability declined when he was hired by the secret services.

The application of a criterion to the game reduces the width of this bandwidth; a maximal-winning criterion can reduce the bandwidth completely.
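
A minimal sketch of this idea, assuming the only figure that matters is how many choices remain genuinely free at each state of the game (the numbers are hypothetical):

```python
from math import log2

def game_bandwidth(free_choices_per_state):
    """Bits a player could transmit per game: a state with k freely
    interchangeable choices carries log2(k) bits."""
    return sum(log2(k) for k in free_choices_per_state if k > 0)

print(game_bandwidth([30] * 40))  # no criterion applied: ~196 bits per game
print(game_bandwidth([3] * 40))   # a good criterion leaves ~3 moves free: ~63 bits
print(game_bandwidth([1] * 40))   # a maximal-winning criterion: 0 bits
```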

Can very generalist perceptions contain knowledge?

When the number of particles grows above a certain calculation capacity, we are forced to use more and more generalist models. Nevertheless, at these generalist levels knowledge persists in other forms. For example, we have the Darwin and Wallace theory of evolution, biology, animal physiology and so on. Furthermore, when it comes to winning more than random in complex issues, such as sociology or psychology, nothing appears to prohibit informal criteria like "people run away from burning homes". Of course this is knowledge too.

Friday 30 January 2015

Presentation of site and objectives

My name is Iñigo Azcorra. I was born in 1983 and I am a materials and industrial engineer. I live in the noble city of Bilbao, in Spain. The purpose of this website is to share with English speakers my thoughts and reflections on a wide variety of questions about science and knowledge. I have been writing another website in Spanish since 2007 about the same subjects, but it can certainly reach a wider public in English. I will translate my previous Spanish works and publish the new ones in both languages. At the very least it should be a good way to practise some English writing. So, I hope you find the articles interesting. You are very welcome, and do not hesitate to leave a comment.