Here are my experiences with 3 days (more precisely, 81 hours) of water fasting. I had been thinking about the effects of abstaining from food for some time, but a busy work schedule hadn't allowed me to actually try it out. Happily, this Easter I had four free days, so I decided to try water fasting, which basically means that you only drink water and do not eat at all.
Here are my Twitter posts in chronological order, translated from Latvian:
April 1, 2am (0h): Last meal (and drink) at party
April 2, 3pm (13h): As I have eaten non-stop for 32 years, I'll take a break to see what will happen. Next meal on Monday morning. Weight 88.9kg
April 2, 3pm (13h): I'm not interested in the religious or detoxification aspects of water fasting; I'm more curious about the physiological and psychological effects.
April 2, 7pm (17h): No hunger so far, probably because I ate a lot of cookies at the party yesterday. Bought water made in Lithuania.
April 2, 10pm (20h): No desire to eat so far, only slight tiredness and it's a little bit hard to concentrate. It seems I'm full of energy.
April 3, 2am (24h): 24h without eating, I have gained 1kg of weight (probably because of Lithuanian water) :D The most interesting part will start tomorrow..
April 3, 8am (30h): still no hunger. Have I been eating just out of habit so far?
April 3, 12pm (34h): my stomach makes some noises and I have a slight desire to chew something (e.g. an apple), otherwise no special effects
April 3, 5pm (39h): finally I've started to fantasize about how good it would be to eat pizza :)
April 3, 6pm (40h): but the first thoughts are about sweets and coffee. Life without stimulants seems a little bit boring.
April 3, 9pm (43h): apathy, feeling sleepy
April 4, 2am (48h): some kind of unpleasant feeling and difficulty falling asleep, but it's not really hunger. All hope rests on tomorrow..
April 4, 1pm (59h): finally the long-awaited effects of abstaining from food - total weakness and fantasies about waffles, sushi, yogurt and other foods before falling asleep.
April 4, 1pm (59h): Theoretically I should now be in the gluconeogenesis phase (http://bit.ly/bkAF0K). It seems I won't get to ketosis.
April 4, 4pm (62h): "total weakness" was a bit of an exaggeration, "slight tiredness" would be more appropriate. And I want to feel hunger so I can eat to repletion tomorrow.
April 4, 10pm (68h): not eating is simply boring, no other special effects after 3 days.
April 5, 2am (72h): sleeping and dreaming about various tasty foods.
April 5, 11am (81h): Finished. No significant feeling of hunger experienced, only slight tiredness and -2 kg. Drinking tea; later - bananas, yogurt, porridge.
Conclusions and observations:
* It was much easier than I expected
* No significant feeling of hunger (only before sleep)
* Almost no physiological effects (only slight tension in abdominal muscles and sometimes soreness in the mouth)
* Slight tiredness (mainly in mornings)
* Mental capabilities almost unaffected (a little bit harder to concentrate)
* Difficult to fall asleep (but that has always been a problem for me)
* Lost 2kg (but I suspect it's mainly the emptying of the digestive tract, not a real loss of fat), 88.9kg => 86.9kg
* Approx. 6-7 liters of water consumed
* Diminished sexual drive :)
* General feeling of boredom
* No elevation of mood or special feeling of well-being (that is probably associated with ketosis, which I didn't reach)
* Marathon training probably gave some advantage (more efficient use of glycogen)
* Not much physical activity during the last 2 days (only sleeping, sitting, a little walking), therefore less need for energy
Sunday, April 4, 2010
Sunday, March 28, 2010
Evolution of flying paper sheets
For some time I have been experimenting with artificial evolution (I'm also interested in the natural one, but that's a different story). While reading Pfeifer & Bongard's "How the Body Shapes the Way We Think: A New View of Intelligence", I stumbled upon the idea of using genetic programming to evolve not virtual organisms, but real objects. One example was a water-pipe design where the genetic algorithm runs on a computer, but the value of the fitness function is determined by changing the actual parameters of the physical system.
The basic idea is that you don't need to implement complex physical reality simulation (air drag, turbulence etc.), but only relatively simple genetic algorithm.
To test the idea I devised the following experimental setup - each "individual" is a sheet of paper (37x194 mm), which is divided by fourteen folding lines:
Each folding line can be left untouched or folded inwards or outwards, thus there are 3^14 = 4,782,969 variants (actually some foldings that cancel each other out are forbidden, so there are slightly fewer). Here are some examples:
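The count above can be verified in a line of code (this ignores the forbidden cancelling combinations, which aren't enumerated here):

```python
# 14 independent folding lines, each with 3 states:
# untouched, folded inwards, folded outwards.
variants = 3 ** 14
print(variants)  # 4782969
```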
The goal (fitness function) of each paper sheet is to fly as far as possible. It is measured by throwing each sheet from a fixed height (around 2.20 m) and measuring the horizontal distance to the landing point:
Each sheet is thrown 3 times and the average distance is calculated. The sheet is always thrown by holding it in a fixed position (by the upper side of the sheet).
The genome encodes specific foldings as a binary string. For example, the binary string 10101110111110011111100 encodes the following folding sequence: Out None None None In Out None Out None None Out Out In None. The first bit determines whether folding place #1 or #2 is folded at all (1=yes, 0=no), the second bit determines which of the folding places is folded (in this case, the first), the third bit encodes the folding direction (inwards or outwards), and so on...
Thus in this case the genotype of each organism equals its phenotype (appearance).
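The exact bit layout above is only partially specified, so here is a hedged sketch with a simpler encoding invented for illustration - 2 bits per folding line - just to show how a binary genotype maps directly onto the folded phenotype:

```python
def decode(genome: str) -> list[str]:
    """Map a binary genome to fold states, 2 bits per folding line.
    00/01 = leave untouched, 10 = fold outwards, 11 = fold inwards.
    (Illustrative encoding only - not the one used in the experiment.)"""
    states = {'00': 'None', '01': 'None', '10': 'Out', '11': 'In'}
    return [states[genome[i:i + 2]] for i in range(0, len(genome), 2)]

# 14 folding lines -> 28-bit genome; this one reproduces the example
# folding sequence from the text above.
print(decode('1000000011100010000010101100'))
# ['Out', 'None', 'None', 'None', 'In', 'Out', 'None', 'Out',
#  'None', 'None', 'Out', 'Out', 'In', 'None']
```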
Then comes the genetic algorithm, which has fairly standard steps:
1) generate initial random population
2) determine fitness function for each individual
3) create an intermediate population by selecting the best individuals (using stochastic universal sampling)
4) perform crossover by randomly choosing two individuals and exchanging their genome sequences at a random point
5) perform mutations by randomly flipping bits (0=>1, 1=>0)
6) fold paper sheets, throw them from fixed height, measure distance and calculate fitness function value for each individual
7) repeat the procedure again and again for each generation
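The loop above can be sketched in code. The fitness function here is a stand-in (counting '1' bits), since in the real experiment fitness came from physically throwing the folded sheets; the 28-bit genome length is likewise an assumption for the sketch:

```python
import random

GENOME_LEN = 28     # assumed genome length for this sketch
POP_SIZE = 20
MUTATION_RATE = 0.02

def fitness(genome: str) -> float:
    # Stand-in: the real fitness was the flight distance averaged over
    # 3 throws. Adding 1 keeps every fitness strictly positive for SUS.
    return genome.count('1') + 1

def sus(population, fitnesses, n):
    """Stochastic universal sampling: n equally spaced pointers over the
    cumulative fitness give lower-variance selection than roulette wheel."""
    step = sum(fitnesses) / n
    start = random.uniform(0, step)
    chosen, cum, i = [], fitnesses[0], 0
    for k in range(n):
        pointer = start + k * step
        while cum < pointer:
            i += 1
            cum += fitnesses[i]
        chosen.append(population[i])
    return chosen

def crossover(a: str, b: str):
    """One-point crossover: swap tails at a random position."""
    p = random.randrange(1, len(a))
    return a[:p] + b[p:], b[:p] + a[p:]

def mutate(genome: str) -> str:
    """Flip each bit independently with probability MUTATION_RATE."""
    return ''.join('10'[int(bit)] if random.random() < MUTATION_RATE else bit
                   for bit in genome)

random.seed(1)
population = [''.join(random.choice('01') for _ in range(GENOME_LEN))
              for _ in range(POP_SIZE)]
for generation in range(5):
    fits = [fitness(g) for g in population]
    parents = sus(population, fits, POP_SIZE)
    population = []
    for a, b in zip(parents[::2], parents[1::2]):
        for child in crossover(a, b):
            population.append(mutate(child))
    print(generation + 1, max(fits))
```

With a physical fitness function, step 6 would simply replace `fitness()` with folding and throwing the sheets and typing the measured distances back in.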
At first I tried a setup with a population size of 10 individuals and mutation rate p=0.01. Either because of the small population size or an incorrectly applied crossover procedure, after 5 generations the population was stuck in some local maximum and the folding sequences started to resemble each other too much.
Here is a summary of the results:
Generation 1 - max 66.5 cm - average 35.7 cm
Generation 2 - max 62.0 cm - average 36.5 cm
Generation 3 - max 71.5 cm - average 37.8 cm
Generation 4 - max 52.5 cm - average 37.1 cm
Generation 5 - max 53.0 cm - average 39.1 cm
The second try used a population of 20 individuals and mutation rate p=0.02. The results were more promising:
Generation 1 - max 73.7 cm - average 27.2 cm
Generation 2 - max 119.7 cm - average 39.3 cm
Generation 3 - max 97.7 cm - average 51.8 cm
Generation 4 - max 123.3 cm - average 46.1 cm
The best far-flying individual (#161) is shown here with its "parents" (#148 and #149):
As we can see, the winning design is quite simple - just fold the sheet diagonally in the middle and it will fly ~2-3 times further than most other sheets with elaborate designs.
Conclusions so far:
* Artificial evolution can find interesting designs, but probably the same result could be achieved with other optimisation techniques.
* Have to test other methods for comparison.
* Good designs are not always retained after crossover and mutation (probably need to increase the proportion of the best individuals in the next generation).
* Folding many paper sheets "by hand" can be quite boring (some kind of semi-automated "embryology" should be considered in further research).
* Foldings are not always perfect (different phenotypes resulting from equal genotypes).
* All operations were performed "by hand" using Excel - no need for programming, but quite cumbersome. A more automated approach should be considered in the future.
* Variability of distances for each individual was not very large (standard deviation / average = ~0.77), meaning that the designs really do play a role in flight distance. In other words, bad designs consistently had short flights, while good designs consistently had long flights in each of the 3 trials.
* A larger population size is better (reduced tendency toward local maxima), but 10 or 20 individuals are probably still not enough to get the real power of artificial evolution, and the number of generations might also need to be larger.
* Mutation rate p=0.01 seemed too small, probably p=0.02 is more appropriate.
* No practical use so far (but it was interesting...).
* Lack of theoretical background (e.g. best mutation rates, population size etc.), but, of course, this is just proof-of-concept and doesn't pretend to prove anything...
Sunday, February 7, 2010
vOICe, day 3
Third training session with vOICe, today using artificial lighting. I can distinguish vertical bright lines and whether these lines occupy the upper or lower part of the visual field, e.g. I can tell that there is a doorway (a black rectangle with bright left-top-right borders). Also I can tell when I'm looking at a cluttered scene; it gives melodic tones, e.g. my bookshelf has a very distinctive la-la-la sound. Of course, it helps that I already know what I'm looking at; without this knowledge of my surroundings it would be much harder to tell what I "see". At the end of a session (~1 hour) I usually feel quite tired of all these sounds; it seems the brain needs rest to absorb the new information.
Conclusion so far - it is harder than I thought; nothing like "mental images" has appeared yet, I guess they come only as a result of intensive training.
Saturday, February 6, 2010
vOICe, day 2
Another hour with vOICe, today in daylight. That meant the background was generally bright (and loud), making it harder to distinguish what I "see". Shadows might have added further difficulties. For example, I was not able to distinguish doorways, because almost everything around them was bright. Nevertheless I tried turning around a few times to disorient myself and was able to establish my direction more or less precisely. It was easier in controlled settings; for example, I sat in front of a bright wall and looked at my laptop (which is black), slowly raising or lowering it and listening to the changes in the sound. Generally the X-axis is quite easy; it is much harder to get a grasp of the Y-axis, which is translated into pitch. If the setting is simple, e.g. a horizontal black laptop on a bright background, the pitch is constant along the X-axis (time), but if the laptop is turned 45 degrees, the sound sweeps from lower to higher pitch and becomes much more difficult to understand. Real cluttered scenes still seem completely meaningless. Another thing I tried was sitting in a dark room and slowly opening the door, which gave a bright line on a dark background with a very distinguishable sound. It was also possible to hear diagonal lines quite precisely.
I also changed the setup a little bit - instead of fixing the webcam on a bicycle helmet, I fixed it directly on my forehead - it gave a much better feeling of where I am looking.
Conclusion - I'm starting to get used to it, but still - nothing like "mental vision", although at some moments I had the feeling that a high sound in the left ear corresponds to "something" in the upper left. And after taking off the blindfold glasses the world seems so full of colors, so bright :)
Friday, February 5, 2010
vOICe, first impressions
While reading “Essays Dedicated to the 50th Anniversary of Artificial Intelligence” I stumbled upon an article about sensory substitution, a very interesting thing. The basic idea - it doesn't matter from which sensors (eyes, ears etc.) you get information, only the structure of the input data matters, e.g. if the brain gets 2-dimensional information, it will construct 2-dimensional visual perceptions.
To try it out I downloaded a simple program called vOICe, intended for blind people, which converts visual data from a webcam into sound waves (soundscapes). Each image is scanned from left to right once per second; brightness corresponds to loudness and the Y-axis corresponds to pitch (e.g. a bright object in the lower part of the "vision" field will correspond to a loud, low sound).
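The mapping can be sketched as follows; the scan rate, frequency range and linear pitch scale are my guesses for illustration, not vOICe's actual parameters:

```python
import math

def soundscape(image, sample_rate=8000, duration=1.0,
               f_low=500.0, f_high=5000.0):
    """Scan a grayscale image (rows of 0..1 brightness values) left to
    right over `duration` seconds. Each row gets its own sine frequency
    (top row = highest pitch) and each pixel's brightness sets that
    sine's amplitude while its column is being scanned.
    (Toy model - the real program's parameters differ.)"""
    height, width = len(image), len(image[0])
    samples_per_col = int(duration * sample_rate / width)
    # One frequency per row, linearly spaced, top row highest.
    freqs = [f_high - (f_high - f_low) * r / max(height - 1, 1)
             for r in range(height)]
    wave = []
    for col in range(width):
        for s in range(samples_per_col):
            t = (col * samples_per_col + s) / sample_rate
            v = sum(image[r][col] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(height))
            wave.append(v / height)  # normalise to roughly [-1, 1]
    return wave

# A bright diagonal line on a dark background gives a steady pitch
# sweep across the one-second scan.
img = [[1.0 if r == c else 0.0 for c in range(8)] for r in range(8)]
wave = soundscape(img)
```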
I spent around an hour today, blindfolded, with the webcam on my bicycle helmet, trying to figure out what I "see". It was already dark outside, so I switched the light on and off to get more contrast. Distinguishing light from darkness was quite easy: darkness was a generally silent sound with some small clicks, brightness was much louder, like white noise on a TV. Seeing details was much more difficult (if not impossible yet); for example, I spent quite a long time trying to "see" my doorway, turning my head in all directions and trying to hear meaningful changes in the soundscape.
More about this later...
Friday, January 15, 2010
AI books worth reading
I did a simple cluster analysis of ~1000 references from the publications in “Essays Dedicated to the 50th Anniversary of Artificial Intelligence”. It turned out that only 14 references are used 3 or more times (Levenshtein distance < 20).
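A minimal version of this analysis can be sketched as follows (the exact clustering procedure isn't described here; this greedy variant compares each reference against one representative per cluster):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def cluster_refs(refs, threshold=20):
    """Greedily assign each reference to the first cluster whose
    representative (first member) is within `threshold` edits."""
    clusters = []
    for ref in refs:
        for c in clusters:
            if levenshtein(ref, c[0]) < threshold:
                c.append(ref)
                break
        else:
            clusters.append([ref])
    return clusters

refs = [
    "Turing, A.M.: Computing machinery and intelligence. Mind 59, 433-460 (1950)",
    "Turing, A.: Computing machinery and intelligence. Mind 59, 433-460 (1950)",
    "Brooks, R.A.: Intelligence without representation. Artificial Intelligence 47, 139-159 (1991)",
]
print([len(c) for c in cluster_refs(refs)])  # [2, 1]
```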
Here are the results:
Turing, A.M.: Computing machinery and intelligence. Mind 59, 433–460 (1950)
Turing, A.M.: Computing machinery and intelligence. Mind 59(236), 433–460 (1950)
Turing, A.M.: Computing machinery and intelligence. Mind (October 1950)
Turing, A.: Computing machinery and intelligence. Mind 59, 433–460 (1950)
=============================================
Chiel, H., Beer, R.: The brain has a body: adaptive behavior emerges from interactions of nervous system, body, and environment. Trends in Neurosciences 20, 553–557 (1997)
Chiel, H., Beer, R.: The brain has a body: adaptive behavior emerges from interactions of nervous system, body, and environment. Trends in Neurosciences 20, 553–557 (1997)
Chiel, H.J., Beer, R.D.: The brain has a body: adaptive behavior emerges from interactions of nervous system, body and environment. Trends in Neurosciences 20(12), 553–557 (1997)
Chiel, H.J., Beer, R.D.: The brain has a body: Adaptive behavior emerges from interactions of nervous system, body and environment. Trends in Neurosciences 20, 553–557 (1997)
=============================================
Clark, A.: Being There: Putting Brain, Body, and World Together Again. MIT Press, Cambridge (1998)
Clark, A.: Being There – Putting Brain, Body, and World Together Again. MIT Press, Cambridge, MA (1997)
Clark, A.: Being there: putting brain, body and world together again. MIT Press, Cambridge (1997)
Clark, A.: Being There. Putting Brain, Body, and World Together. MIT Press, Cambridge (1997)
=============================================
Pfeifer, R., Scheier, C.: Understanding Intelligence. MIT Press, Cambridge (2001)
Pfeifer, R., Scheier, C.: Understanding Intelligence. MIT Press, Cambridge, MA (1999)
Pfeifer, R., Scheier, C.: Understanding Intelligence. The MIT Press, Cambridge, MA (1999)
Pfeifer, R., Scheier, C.: Understanding Intelligence. MIT Press, Cambridge, MA (1999)
Pfeifer, R., Scheier, C.: Understanding Intelligence. The MIT Press, Cambridge (1999)
Pfeifer, R., Scheier, C.: Understanding Intelligence. MIT Press, Cambridge (2000)
Pfeifer, R., Scheier, C.: Understanding Intelligence. MIT Press, Cambridge, MA (1999)
Pfeifer, R., Scheier, C.: Understanding Intelligence. MIT Press, Cambridge, MA (1996)
=============================================
Pfeifer, R., Bongard, J.: How the Body Shapes the Way We Think: A New View of Intelligence. MIT Press, Cambridge, MA (2007)
Pfeifer, R., Bongard, J.C.: How the Body Shapes the Way we Think – A New View of Intelligence. MIT Press, Cambridge, MA (2007)
Pfeifer, R., Bongard, J.: How the body shapes the way we think: a new view of intelligence. MIT press, Cambridge (2007)
Pfeifer, R., Bongard, J.C.: How the Body Shapes the Way we Think – A New View of Intelligence. MIT Press, Cambridge, MA (2007)
Pfeifer, R., Bongard, J.C.: How the body shapes the way we think: a new view of intelligence. MIT Press, Cambridge, MA (2007)
Pfeifer, R., Bongard, J.: How the body shapes the way we think: A new view of intelligence. MIT Press, Cambridge (2006)
=============================================
Lungarella, M., Metta, G., Pfeifer, R., Sandini, G.: Developmental robotics: a survey. Connection Science 15(4), 151–190 (2003)
Lungarella, M., Metta, G., Pfeifer, R., Sandini, G.: Developmental robotics: a survey. Connection Science 15(4), 151–190 (2003)
Lungarella, M., Metta, G., Pfeifer, R., Sandini, G.: Developmental robotics: A survey. Connection Science 15(4), 151–190 (2003)
Lungarella, M., Metta, G., Pfeifer, R., Sandini, G.: Developmental Robotics: A Survey. Connection Science 15(4), 151–190 (2003)
=============================================
Bongard, J., Zykov, V., Lipson, H.: Resilient machines through continuous self modeling. Science 314, 1118–1121 (2006)
Bongard, J.C., Zykov, V., Lipson, H.: Resilient machines through continuous selfmodeling. Science 314, 1118–1121 (2006)
Bongard, J., Zykov, V., Lipson, H.: Resilient machines through continuous selfmodeling. Science 314(5802), 1118–1121 (2006)
Bongard, J., Zykov, V., Lipson, H.: Resilient Machines Through Continuous Self- Modeling. Science 314(5802), 1118–1121 (2006)
=============================================
Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
Sutton, R., Barto, A.: Reinforcement learning: An introduction. MIT Press, Cambridge,MA (1998)
Sutton, R., Barto, A.: Reinforcement learning: an introduction. MIT Press, Cambridge, MA (1998)
=============================================
Collins, S., Ruina, A., Tedrake, R., Wisse, M.: Efficient bipedal robots based on passivedynamic walkers. Science 307, 1082–1085 (2005)
Collins, S., Ruina, A., Tedrake, R., et al.: Efficient bipedal robots based on passivedynamic walkers. Science 307, 1082–1085 (2005)
Collins, S., Ruina, A., Tedrake, R., Wisse, M.: Efficient bipedal robots based on passive dynamic walkers. Science Magazine 307, 1082–1085 (2005)
=============================================
Brooks, R.A.: Intelligence without representation. Artificial Intelligence 47, 139– 159 (1991)
Brooks, R.A.: Intelligence without representation. Artificial Intelligence 47(1-3), 139–160 (1991)
Brooks, R.A.: Intelligence without representation. Artificial Intelligence 47, 139– 159 (1991)
Brooks, R.A.: Intelligence without representation. Artificial Intelligence 47(47), 139–159 (1991)
=============================================
Beer, R.D.: The dynamics of active categorical perception in an evolved model agent. Adaptive Behavior 11(4), 209–243 (2003)
Beer, R.D.: The dynamics of active categorical perception in an evolved model agent. Adaptive Behavior 11(4), 209–243 (2003)
Beer, R.: The dynamics of active categorical perception in an evolved model agent. Adaptive Behavior 11(4), 209–243 (2003)
Beer, R.: The dynamics of active categorical percetion in an evolved model agent. Adaptive Behavior 11, 209–244 (2003)
=============================================
Lungarella, M., Pegors, T., Bulwinkle, D., Sporns, O.: Methods for quantifying the information structure of sensory and motor data. Neuroinformatics 3(3), 243–262 (2005)
Lungarella, M., Pegors, T., Bulwinkle, D., Sporns, O.: Methods for quantifying the informational structure of sensory and motor data. Neuroinformatics 3(3), 243–262 (2005)
Lungarella, M., Pegors, T., Bulwinkle, D., Sporns, O.: Methods for Quantifying the Informational Structure of Sensory and Motor Data. Neuroinformatics 3, 243–262 (2005)
=============================================
Lungarella, M., Sporns, O.: Mapping information flow in sensorimotor networks. PLoS Computational Biology 2, 1301–1312 (2006)
Lungarella, M., Sporns, O.: Mapping information flow in sensorimotor networks. PLoS Computational Biology 2(10), 1301–1312 (2006)
Lungarella, M., Sporns, O.: Mapping Information Flow in Sensorimotor Networks. PLOS Computational Biology 2(10), 1301–1312 (2006)
Lungarella, M., Sporns, O.: Mapping information flow in sensorimotor networks. PLoS Computational Biology 10, 1301–1312 (2006)
Lungarella, M., Sporns, O.: Mapping Information Flow in Sensorimotor Networks. PLOS Computational Biology 2, 1301–1312 (2006)
=============================================
Tedrake, R., Zhang, T.W., Seung, H.S.: Stochastic policy gradient reinforcement learning on a simple 3D biped. In: Proc. of 10th Int. Conf. on Intelligent Robots and Systems, pp. 3333–3338 (2004)
Tedrake, R., Zhang, T.W., Seung, H.S.: Stochastic policy gradient reinforcement learning on a simple 3D biped. In: Proc. of 10 th Int. Conf. on Intelligent Robots and Systems, pp. 3333–3338 (2004)
Tedrake, R., Zhang, T.W., Seung, H.S.: Stochastic policy gradient reinforcement learning on a simple 3D biped. In: Proc. of the 10th Int. Conf. on Intelligent Robots and Systems, pp. 2849–2854 (2004)
=============================================
CLUSTERS = 14
References by decade of publication:
195x - 1
199x - 5
200x - 8
8x - Pfeifer, R., Scheier, C.: Understanding Intelligence. MIT Press, Cambridge, MA (1999)
6x - Pfeifer, R., Bongard, J.: How the Body Shapes the Way We Think: A New View of Intelligence. MIT Press, Cambridge, MA (2007)
5x - Lungarella, M., Sporns, O.: Mapping information flow in sensorimotor networks. PLoS Computational Biology 2, 1301–1312 (2006)
4x - Turing, A.M.: Computing machinery and intelligence. Mind 59, 433–460 (1950)
4x - Chiel, H., Beer, R.: The brain has a body: adaptive behavior emerges from interactions of nervous system, body, and environment. Trends in Neurosciences 20, 553–557 (1997)
4x - Lungarella, M., Metta, G., Pfeifer, R., Sandini, G.: Developmental robotics: a survey. Connection Science 15(4), 151–190 (2003)
4x - Bongard, J., Zykov, V., Lipson, H.: Resilient machines through continuous self modeling. Science 314, 1118–1121 (2006)
4x - Brooks, R.A.: Intelligence without representation. Artificial Intelligence 47, 139– 159 (1991)
4x - Beer, R.D.: The dynamics of active categorical perception in an evolved model agent. Adaptive Behavior 11(4), 209–243 (2003)
3x - Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
3x - Collins, S., Ruina, A., Tedrake, R., Wisse, M.: Efficient bipedal robots based on passivedynamic walkers. Science 307, 1082–1085 (2005)
3x - Lungarella, M., Pegors, T., Bulwinkle, D., Sporns, O.: Methods for quantifying the information structure of sensory and motor data. Neuroinformatics 3(3), 243–262 (2005)
3x - Tedrake, R., Zhang, T.W., Seung, H.S.: Stochastic policy gradient reinforcement learning on a simple 3D biped. In: Proc. of 10th Int. Conf. on Intelligent Robots and Systems, pp. 3333–3338 (2004)
Labels: ai, artificial intelligence, cluster analysis