Saturday 31 January 2009

Random number and Gaussian random number algorithm / code / generator in C++

Actually, I lost a few hours searching for code for a random number generator that follows a Gaussian distribution with a given mean and variance. So I am posting the final result of my googling and some hard work, because I think it will save you those hours.

C++ code (I have used Microsoft Visual Studio 2005). PLEASE PAY ATTENTION TO "stdafx.h" IF YOU ARE NOT USING Microsoft Visual Studio 2005:


// GussianDistributionGeneratior.cpp : Defines the entry point for the console application.
// Author: Rudra Poudel

#include "stdafx.h"
#include<stdio.h>
#include<stdlib.h>
#include<conio.h>
#include<math.h>
#include<float.h>
#include<limits.h>
#include<time.h>

//Random generator parameters
#define IA 16807
#define IM 2147483647
#define AM (1. /IM)
#define IQ 127773
#define IR 2836
#define NTAB 32
#define NDIV (1+(IM-1)/NTAB)
#define EPS 1.2e-7
#define RNMX (1.0-EPS)
long idum;

float getRand(long *idum);
float getGaussianRand(long *idum, double mean, double sigma);

int _tmain(int argc, _TCHAR* argv[])
{
    idum = -time(0);  // negative seed initialises the generator on its first call

    printf("Random Numbers:\n");
    for (int i = 0; i < 100; i++) {
        printf("%f\t", getRand(&idum));
    }
    printf("\n\nPress any key to continue...");
    getche();

    printf("\n\nGaussian Random Numbers with mean=0 and variance=1:\n");
    for (int i = 0; i < 100; i++) {
        printf("%f\t", getGaussianRand(&idum, 0, 1));
    }
    printf("\n\nPress any key to continue...");
    getche();

    return 0;
}

//Below function follows the Random Number Generators from Numerical Recipes in C, v2.0 (C) Copr. 1986-92 Numerical Recipes Software

/************************************************************************
* name: getRand
* description: uniform random number generator (ran1 from Numerical Recipes in C)
* input: idum: seed for the random number generator
* output: uniform random number in the (0, 1) range
************************************************************************/
float getRand(long *idum)
{
    int j;
    long k;
    static long iy = 0;
    static long iv[NTAB];
    float temp;

    if (*idum <= 0 || !iy) {          // initialise the shuffle table on first use
        if (-(*idum) < 1) *idum = 1;  // prevent idum = 0
        else *idum = -(*idum);
        for (j = NTAB + 7; j >= 0; j--) {
            k = (*idum) / IQ;
            *idum = IA * (*idum - k * IQ) - IR * k;  // Schrage's method avoids overflow
            if (*idum < 0) *idum += IM;
            if (j < NTAB) iv[j] = *idum;
        }
        iy = iv[0];
    }
    k = (*idum) / IQ;
    *idum = IA * (*idum - k * IQ) - IR * k;
    if (*idum < 0) *idum += IM;
    j = iy / NDIV;   // Bays-Durham shuffle to break up serial correlations
    iy = iv[j];
    iv[j] = *idum;
    if ((temp = AM * iy) > RNMX) return RNMX;  // avoid returning exactly 1.0
    else return temp;
}

/************************************************************************
* name: getGaussianRand
* description: Gaussian random number generator with a given seed, mean and standard deviation
* input: idum: seed for the random number generator
* mean: mean of the distribution; for the standard normal distribution mean = 0
* sigma: standard deviation of the distribution (note: the variance is its square);
* for the standard normal distribution sigma = 1. As in most cases people need
* Gaussian random numbers with mean = 0 and sigma = 1, if you are not sure just
* use those, or plot the points and observe the effect of different values.
* output: a random value from the Gaussian distribution with the given parameters
************************************************************************/
float getGaussianRand(long *idum, double mean, double sigma)
{
    static int iset = 0;
    static float gset;
    float fac, rsq, v1, v2;

    if (iset == 0) {
        // Polar Box-Muller: pick a uniform point inside the unit circle...
        do {
            v1 = 2.0 * getRand(idum) - 1.0;
            v2 = 2.0 * getRand(idum) - 1.0;
            rsq = v1 * v1 + v2 * v2;
        } while (rsq >= 1.0 || rsq == 0.0);
        // ...and transform it into two independent standard normal deviates
        fac = sqrt(-2.0 * log(rsq) / rsq);
        gset = v1 * fac;  // save one deviate for the next call
        iset = 1;
        return ((v2 * fac * sigma) + mean);
    } else {
        iset = 0;
        return ((gset * sigma) + mean);  // use the saved deviate
    }
}
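
As an aside, if your compiler is newer than mine and ships the C++11 <random> header (or its TR1 predecessor), the standard library already provides equivalent generators; a minimal sketch:

#include <cstdio>
#include <ctime>
#include <random>

int main()
{
    // Mersenne Twister engine seeded from the clock
    std::mt19937 rng(static_cast<unsigned>(std::time(0)));

    // Uniform deviates in [0, 1), like getRand()
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);

    // Gaussian deviates with mean 0 and standard deviation 1,
    // like getGaussianRand(&idum, 0, 1)
    std::normal_distribution<float> gaussian(0.0f, 1.0f);

    for (int i = 0; i < 100; i++)
        printf("%f\t", gaussian(rng));
    printf("\n%f\n", uniform(rng));
    return 0;
}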



As I am running out of time, I am unable to write up the description and theory behind it. Please don't hesitate to drop a comment or question if you find something strange!

Tuesday 27 January 2009

Analysis of Continuous-Time Recurrent Neural Network

Recent years' research shows that the Continuous-Time Recurrent Neural Network (CTRNN) is important for modelling the dynamic behaviour of a system, so a detailed analysis and understanding of the CTRNN matters. As such, this report analyses the CTRNN equation for a single node and for two nodes. Basically, I experiment with and analyse how each individual parameter of the CTRNN affects the output of the system, and I also report the relationships between the different parameters. Finally, I discuss the stability of the CTRNN with respect to different values of its parameters.
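
For readers unfamiliar with the model, here is a minimal single-node sketch of the standard CTRNN equation, tau * dy/dt = -y + w * sigmoid(y + theta) + I, integrated with the Euler method (the parameter values below are illustrative only, not the ones analysed in the report):

#include <cstdio>
#include <cmath>

// Standard sigmoid activation
double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

int main()
{
    // Single self-connected CTRNN node:
    //   tau * dy/dt = -y + w * sigmoid(y + theta) + I
    double tau   = 1.0;   // time constant
    double w     = 5.0;   // self-connection weight
    double theta = -2.5;  // bias
    double I     = 0.0;   // external input
    double y     = 0.0;   // node state
    double dt    = 0.01;  // Euler integration step

    for (int step = 0; step < 1000; step++) {
        double dy = (-y + w * sigmoid(y + theta) + I) / tau;
        y += dt * dy;     // Euler update
    }
    printf("steady-state output: %f\n", sigmoid(y + theta));
    return 0;
}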

Download full pdf text

Hebbian Learning using Fixed Weight Evolved Dynamical ‘Neural’ Networks

In connectionist artificial neural networks, followers of Hebbian learning (Hebb 1949) strongly believe that the activity between two nodes can be increased or decreased by changing the connection weight between them. However, we claim and show that learning can be produced without changing the connection weights in a dynamic neural network, as we believe that learning is produced by interaction with a dynamic environment. To show this I have reworked the research of Izquierdo & Harvey (2007). However, I achieved lower fitness than they did, and my best evolved 4-node circuit does not achieve the task as well as theirs. I used an evolutionary approach to synthesise the Continuous-Time Recurrent Neural Network parameters (a sketch of the general idea follows below). The experimental methodology and the output of the experiment are described in detail.
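
For a flavour of the approach (this is not the algorithm from the report, just a minimal hill-climbing sketch of "search the CTRNN parameter space, keep the weights fixed while the agent behaves"; fitness() is a dummy stand-in for the real task evaluation):

#include <cstdio>
#include <cstdlib>
#include <ctime>

const int N_PARAMS = 24;  // e.g. the weights, biases and time constants of a small circuit

// Dummy placeholder fitness: a real run would simulate the agent with these
// fixed parameters and score its behaviour; here we just reward being near zero.
double fitness(const double p[N_PARAMS])
{
    double s = 0.0;
    for (int i = 0; i < N_PARAMS; i++) s -= p[i] * p[i];
    return s;
}

int main()
{
    srand((unsigned)time(0));
    double best[N_PARAMS], trial[N_PARAMS];

    for (int i = 0; i < N_PARAMS; i++)
        best[i] = 2.0 * rand() / RAND_MAX - 1.0;  // random start in [-1, 1]
    double bestFit = fitness(best);

    for (int gen = 0; gen < 10000; gen++) {
        // Mutate every parameter a little between evaluations; the parameters
        // stay fixed during the agent's (hypothetical) lifetime inside fitness()
        for (int i = 0; i < N_PARAMS; i++)
            trial[i] = best[i] + 0.1 * (2.0 * rand() / RAND_MAX - 1.0);
        double f = fitness(trial);
        if (f > bestFit) {  // keep the mutant only if it scores better
            bestFit = f;
            for (int i = 0; i < N_PARAMS; i++) best[i] = trial[i];
        }
    }
    printf("best fitness found: %f\n", bestFit);
    return 0;
}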

Download full pdf text

Thursday 22 January 2009

AquaLogic Collaboration DB connection failure with SQL Server

If you are using SQL Server as a backend for AquaLogic or Plumtree, its diagnostic test will fail due to a schema name mismatch between the configuration file and SQL Server's schema name for the Collaboration DB. To correct this problem, edit the database.xml files with the following steps:

- find the database.xml files in the AquaLogic installation directory [e.g. C:\bea]

- in entries of the form DB_ali_Name.DB_User_Name, change the [DB_User_Name] part to [dbo]

After that the diagnostic should succeed and the system will work fine. Don't forget to restart the services after the xml file changes.

Action Selection using Reinforcement Learning

The success of an adaptive autonomous agent is judged by how well it performs the desired task in a dynamic environment, i.e. by its selection of appropriate actions even as circumstances and the environment change. An agent cannot select actions at random, nor simply in a fixed sequence, because the environment in which it acts is dynamic in nature. Hence action selection remains a central question, especially in the field of adaptive autonomous agents that must function robustly and efficiently in complex and dynamic environments (Blumberg 1994, p.22). In this situation, reinforcement learning, i.e. using the feedback (negative or positive) from a performed action, plays a very important role in deciding future actions with the help of present experience. So although there are some difficulties in implementing the concept, reinforcement learning is still the first choice when building adaptive autonomous agents. In this essay I describe how reinforcement learning helps an agent select actions efficiently in a dynamic environment, and I also highlight the importance of reinforcement learning in action selection as well as its limitations.
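
To make the idea concrete, here is a minimal sketch of mine (not from the essay) of reinforcement-learning-driven action selection: one-step Q-learning with epsilon-greedy exploration. The step() environment is a dummy stand-in for a real task:

#include <cstdio>
#include <cstdlib>
#include <ctime>

const int N_STATES = 16;
const int N_ACTIONS = 4;
double Q[N_STATES][N_ACTIONS] = { {0} };

// Dummy stand-in for the environment: rewards one particular action per
// state and moves on. A real task would go here.
double step(int state, int action, int *nextState)
{
    *nextState = (state + 1) % N_STATES;
    return (action == state % N_ACTIONS) ? 1.0 : 0.0;
}

// Epsilon-greedy selection: usually exploit the best-known action,
// occasionally explore a random one.
int selectAction(int state, double epsilon)
{
    if (rand() / (double)RAND_MAX < epsilon)
        return rand() % N_ACTIONS;  // explore
    int best = 0;
    for (int a = 1; a < N_ACTIONS; a++)
        if (Q[state][a] > Q[state][best]) best = a;
    return best;                    // exploit
}

int main()
{
    srand((unsigned)time(0));
    const double alpha = 0.1;    // learning rate
    const double gamma = 0.9;    // discount factor
    const double epsilon = 0.1;  // exploration rate
    int s = 0;

    for (int t = 0; t < 100000; t++) {
        int a = selectAction(s, epsilon);
        int s2;
        double r = step(s, a, &s2);
        // One-step Q-learning update: move Q(s,a) toward r + gamma * max Q(s2,.)
        double maxNext = Q[s2][0];
        for (int a2 = 1; a2 < N_ACTIONS; a2++)
            if (Q[s2][a2] > maxNext) maxNext = Q[s2][a2];
        Q[s][a] += alpha * (r + gamma * maxNext - Q[s][a]);
        s = s2;
    }
    printf("learned value of the rewarded action in state 0: %f\n", Q[0][0]);
    return 0;
}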

Download full pdf text