
Tutorial: Building Projects

StephenJRead edited this page Mar 22, 2021 · 56 revisions

Table of Contents

Steps in Building a Model

  1. Decide what you are going to model

  2. Design the structure of the Network

    1. What Layers do you need?

    2. How many layers?

    3. What are their names? Remember, you will use these names when defining the layers and in several other places to refer to them.

      1. You can pick any names you want; you do not need to use Input, Output, or Hidden if more descriptive names fit better.
    4. How many nodes are in each layer, and what is their shape?

      1. The number of nodes in Hidden layers and their level of inhibition will influence what they can represent.
    5. What kind of layer are they: Input, Hidden, Target (for Training) or Compare (for Testing, learning off).

    6. Update and add layer names in the relevant places in the program (described on the following pages).

    7. How are the layers connected to other layers?

      1. Unidirectional or bidirectional?

      2. If unidirectional, which direction?

      3. Can you use a Full projection, or do you need a specialized one, such as a one-to-one or receptive-field projection?

  3. Can you use the default parameters or do you need to tweak some of them? Some of the most important are:

    1. Inhibition in a layer

      1. You need to reduce it if a layer has a small number of nodes.
    2. Learning rate on projections?

      1. Should the learning rate for the whole model be different from default?

      2. Or should just some of the projections be different from default?

      3. Should learning be on or off?

    3. Gain and threshold in the activation function for nodes or units.

    4. Conductances (excitatory and inhibitory) of nodes (function as gain)

    5. Weight scaling (absolute and relative) on the weights.

    6. Number of epochs.

    7. Number of runs.

    8. NZeroStop

  4. Design training and testing data.

  5. Create Training and Testing Data Tables to match the structure of your network

    1. Manual

      1. Copy an already created tab-delimited text file (tsv file), such as the one in the ra25 program, and then manually edit the headers and entries.
    2. Automatic

      1. Run (or edit and then run) the ConfigPats function in the ra25 example (also described near the end of this tutorial). It creates a tab-delimited text file (tsv) that you can then save and enter data into, and it can be extended to more layers. Currently it generates 25 rows of random data, with 5 x 5 layers for both Input and Output. This can be modified as necessary.
  6. Modify Output Logs, if necessary, to add new layers or new variables that are not tracked in the example Tables.

  7. Modify GUI, if needed.

    1. Add Tabs for commands.

    2. Add Tabs for Plots

Sections of Code That You Are Most Likely to Want to Modify:

Helpful hints

Text editors and IDEs. Text editors such as BBEdit (Mac only) and Atom and Sublime Text (Mac, Windows, and Linux) make it easy to find functions and provide syntax coloring (as do the full-featured IDEs). Full-featured IDEs such as Xcode, GoLand, Visual Studio Code, and Gide can be used to develop in Go, and Visual Studio Code, PyCharm, and Gide can be used for developing in Python. Gide was developed by Randy O'Reilly and is written in Go; you can get the code at https://github.com/goki/gide/ and compile it like any Go program. The other programs are commercial, but most, if not all, have free versions.

Warning: Reading data and weight files. There are two ways you can configure your program to read data and weight files into an emergent project.

  1. You can read the data and weights in from an external file that is properly set up. See the Wiki for descriptions of how Training and Testing files should be formatted. There are separate instructions for how to format weight files. The description for how to create code for reading in data can be found in the Defining/Opening Training, Testing Data files section of this tutorial

  2. You can "embed" the data and weight files into the executable. The description of how to do that can be found in the Wiki at:

https://github.com/emer/emergent/wiki/Embed

Note: As of Go version 1.16, embed functionality is built into the language, and the technique in the Wiki is no longer required, although it should still work.

It is important to be aware that the ra25 example reads in data from an external file, whereas most, if not all, of the sims for the textbook embed the data and weights in the executable. This difference has critical implications for how you execute a project once it is compiled.

Executing a file with these different configurations for reading data and weight files

  • Read from external file: If you read data from external data files, as in the ra25 project, then you need to launch the program by typing ./projectname at the command line in a Terminal window while you are in the relevant folder, so it knows to look for the data files in the current folder. Double-clicking the executable will start the program, but it will not read in the data files, and the program will typically crash when you Init it.

  • Read from embedded data: If you have embedded the data in the executable, then you do not need the ./ prefix. You can either type the projectname at the command line or double-click the binary/executable.
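The embedded-data option can be illustrated with a minimal, self-contained sketch. The file name easy.tsv, the column layout, and the countRows helper are made up for illustration; with Go 1.16+ the //go:embed directive shown in the comment is the built-in mechanism the Note above refers to.

```go
package main

// Hedged sketch of data "embedded" in the executable. With Go 1.16+ you
// would write:
//
//	//go:embed easy.tsv
//	var easyPats []byte
//
// and the file contents are compiled into the binary, so the program no
// longer depends on finding easy.tsv in the current folder. Here a string
// constant stands in for the embedded bytes so the sketch is runnable on
// its own; the file name and columns are illustrative.

import (
	"fmt"
	"strings"
)

const easyPats = "Name\tInput\tOutput\n" +
	"trl0\t1\t0\n" +
	"trl1\t0\t1\n"

// countRows returns the number of data rows (lines after the header).
func countRows(tsv string) int {
	lines := strings.Split(strings.TrimRight(tsv, "\n"), "\n")
	return len(lines) - 1
}

func main() {
	fmt.Println("embedded data rows:", countRows(easyPats))
}
```

In a real project you would feed the embedded bytes to the table reader instead of calling OpenCSV on a file path.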

One other reminder: typing go build filename.go at the command line while in the relevant folder builds only that file. But many projects rely on multiple Go files that need to be compiled into a single executable, and if you compile only the single file, the build will typically fail with an error. To compile all the files in a folder into a single executable, type simply go build, which compiles everything into one executable named after the folder.

Comments at the Beginning of the File

Use comments at the beginning of the file to document what the program is doing.

Parameters for Network, Layers, and Projections

How to set the different parameters for nodes, layers, projections, and the Network

ParamSets

ParamSets is the default set of parameters -- Base is always applied, and others can be optionally selected to apply on top of that

Although this is the first major part of the program, this is not where I would typically start. I would first set up the simulation and configure the Network.

var ParamSets = params.Sets{

Below are all the defaults. (There is a button on NetView in compiled projects that shows both default and non-default Params for a program.)

Following the defaults is the description of how to define parameters.

Regardless of whether you use type, class or specific name in referring to a layer or projection, you use the same statement to set parameters. See examples below, or in ra25 or various sims for the textbook.

You can use the Type, the Class, or the specific Name of an object when defining parameters.

We adopt the CSS (cascading-style-sheets) standard where parameters can be specified in terms of the

  • Name of an object (e.g., #Hidden),

  • the Class of an object (e.g., .TopDown -- where the class name TopDown is manually assigned to relevant elements), and

  • the Type of an object (e.g., Layer applies to all layers). 
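The selector scheme above can be illustrated with a small self-contained sketch that re-implements the matching idea in plain Go. The real params package handles cascading, precedence, and parameter application, so treat this only as an illustration of how #Name, .Class, and plain Type selectors differ:

```go
package main

// Hedged, self-contained illustration of the CSS-style selectors described
// above: "#X" matches by Name, ".X" by Class, anything else by Type.
// The obj type and applies function are made up for this sketch.

import "fmt"

type obj struct {
	Name, Class, Type string
}

// applies reports whether selector sel matches object o.
func applies(sel string, o obj) bool {
	switch {
	case len(sel) > 1 && sel[0] == '#':
		return o.Name == sel[1:] // name selector, e.g. #Hidden
	case len(sel) > 1 && sel[0] == '.':
		return o.Class == sel[1:] // class selector, e.g. .TopDown
	default:
		return o.Type == sel // type selector, e.g. Layer
	}
}

func main() {
	hid := obj{Name: "Hidden", Class: "TopDown", Type: "Layer"}
	for _, sel := range []string{"Layer", ".TopDown", "#Hidden", "#Output"} {
		fmt.Printf("%-9s matches Hidden: %v\n", sel, applies(sel, hid))
	}
}
```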

Layer names are defined as part of creating the layers in your network, in the ConfigNet part of the program.

Projections are automatically named by the program by concatenating "SendingLayerName" + To + "ReceivingLayerName" (e.g., InputToHidden). A Projection's name can also be explicitly defined in the ConfigNet part of the program.

The Class of an object (either Layer or Projection) can be defined in the ConfigNet part of the program. Note that you do not have to define a Class unless you need to refer to that Class later, such as in defining parameters.

An example from the bg.go project

Set different projections to be members of the same class

The following code sets the result of net.ConnectLayersPrjn to a variable pj and then uses SetClass to set different projections to the same class. You can then set unique parameters for all projections in that class with a single reference, instead of having to set the parameters for each projection independently.

	pj := net.ConnectLayersPrjn(inp, mtxGo, onetoone, emer.Forward, &pbwm.DaHebbPrjn{})
	pj.SetClass("MatrixPrjn")

	pj = net.ConnectLayersPrjn(inp, mtxNoGo, onetoone, emer.Forward, &pbwm.DaHebbPrjn{})
	pj.SetClass("MatrixPrjn")

	pj = net.ConnectLayers(inp, pfcOut, onetoone, emer.Forward)
	pj.SetClass("PFCFixed")

Example from attn.go project

Set layers to a class:

First set the result of net.AddLayer4D (or net.AddLayer2D) to a variable, such as inp or ob1, which holds the Layer.

	inp := net.AddLayer4D("Input", 1, 7, 2, 1, emer.Input)
	v1 := net.AddLayer4D("V1", 1, 7, 2, 1, emer.Hidden)
	sp1 := net.AddLayer4D("Spat1", 1, 5, 2, 1, emer.Hidden)
	sp2 := net.AddLayer4D("Spat2", 1, 3, 2, 1, emer.Hidden)
	ob1 := net.AddLayer4D("Obj1", 1, 5, 2, 1, emer.Hidden)
	out := net.AddLayer2D("Output", 2, 1, emer.Compare)
	ob2 := net.AddLayer4D("Obj2", 1, 3, 2, 1, emer.Hidden)

Then use SetClass to set the layers, such as ob1 and ob2, to be members of the same class.

You can then set unique parameters that apply only to the Layers that are members of this class.

	ob1.SetClass("Object")
	ob2.SetClass("Object")
	sp1.SetClass("Spatial")
	sp2.SetClass("Spatial")
Set projections to a class when BidirConnectLayers is used.

The first variable in the assignment statement below, spob1, is assigned the projection from sp1 to ob1, and the second variable, obsp1, is assigned the projection from ob1 to sp1. You can then set each projection to a different class.

Between pathways

	p1to1 := prjn.NewPoolOneToOne()
	spob1, obsp1 := net.BidirConnectLayers(sp1, ob1, p1to1)
	spob2, obsp2 := net.BidirConnectLayers(sp2, ob2, p1to1)
	spob1.SetClass("SpatToObj")
	spob2.SetClass("SpatToObj")
	obsp1.SetClass("ObjToSpat")
	obsp2.SetClass("ObjToSpat")


.Back, .Forward and .Lateral are predefined projection Classes, which are applied to a projection when the ConnectLayers command is used.

These are the general classes of parameters for Layer and Prjn.

These specify parameters for Layers and for the nodes within a Layer.

	Layer.Act.

	Layer.Inhib.

	Layer.Learn.

These specify parameters for Projections.

	Prjn.WtInit.

	Prjn.WtScale.

	Prjn.Learn.

The following example is from error_driven_hidden and lists all the default parameters for the layers and projections in that model. The model has an Input layer, a Hidden layer, and an Output layer. Only one Layer and one Projection are kept below to simplify the description, as the others would be duplicates.

The options of most interest are annotated with // comments below.

/////////////////////////////////////////////////

Layer: Input

Act: {

XX1: { Thr: 0.5 Gain: 100 NVar: 0.005 VmActThr: 0.01 } // parameters of activation function

OptThresh: { Send: 0.1 Delta: 0.005 }

Init: { Decay: 1 Vm: 0.4 Act: 0 Ge: 0 } // Decay indicates proportion of activation that decays trial to trial

Dt: { Integ: 1 VmTau: 3.3 GTau: 1.4 AvgTau: 200 }

Gbar: { E: 1 L: 0.1 I: 1 K: 1 } // conductances, especially important are excitatory and inhibitory

Erev: { E: 1 L: 0.3 I: 0.25 K: 0.25 }

Clamp: { Hard: true Range: { Min: 0 Max: 0.95 } Gain: 0.2 Avg: false AvgGain: 0.2 }

Noise: { Dist: Uniform Mean: 0 Var: 0 Par: 0 Type: NoNoise Fixed: true }

VmRange: { Min: 0 Max: 2 }

KNa: { On: false Rate: 0.8 Fast: { On: true Rise: 0.05 Max: 0.1 Tau: 50 Dt: 0.02 }
Med: { On: true Rise: 0.02 Max: 0.1 Tau: 200 Dt: 0.005 }
Slow: { On: true Rise: 0.001 Max: 1 Tau: 1000 Dt: 0.001 } } }

Inhib: {

Layer: { On: true Gi: 1.3 FF: 1 FB: 1 FBTau: 1.4 MaxVsAvg: 0 FF0: 0.1 } // whether inhibition is on in a layer, and the amount of inhibition.

Pool: { On: false Gi: 1.8 FF: 1 FB: 1 FBTau: 1.4 MaxVsAvg: 0 FF0: 0.1 } // same parameters for pools of units

Self: { On: false Gi: 0.4 Tau: 1.4 }

ActAvg: { Init: 0.5 Fixed: true UseExtAct: false UseFirst: true Tau: 100 Adjust: 1 }

}

Learn: {

ActAvg: { SSTau: 2 STau: 2 MTau: 10 LrnM: 0.1 Init: 0.15 }

AvgL: { Init: 0.4 Gain: 1.5 Min: 0.2 Tau: 10 LrnMax: 0.5 LrnMin: 0.0001 ErrMod: true ModMin: 0.01 }

CosDiff: { Tau: 100 }

}

///////////////////////////////////////////////////

Prjn: InputToHidden

WtInit: { Dist: Uniform Mean: 0.5 Var: 0.25 Par: 0 Sym: true}

// randomly initialized starting weights: the shape of the distribution, its mean, and its variance.

WtScale: { Abs: 1 Rel: 1}

// whether weights are rescaled; Abs is absolute scaling of a weight, Rel is scaling relative to other projections.

Learn: {

Learn: true Lrate: 0.04 LrateInit: 0.04} // whether learning is on, the learning rate, and its starting value if the learning rate changes over training.

XCal: { MLrn: 1 SetLLrn: true LLrn: 0 DRev: 0.1 DThr: 0.0001 LrnThr: 0.01 } // whether both Hebbian and error-correcting learning are used. See the ParamSets example below for how to set parameters for Hebbian, error-correcting, and combined learning.

WtSig: { Gain: 6 Off: 1 SoftBound: true }

Norm: { On: false DecayTau: 1000 NormMin: 0.001 LrComp: 0.15 Stats: false }

Momentum: { On: false MTau: 10 LrComp: 0.1 }

WtBal: { On: false AvgThr: 0.25 HiThr: 0.4 HiGain: 4 LoThr: 0.4 LoGain: 6 }

}

// ParamSets is the default set of parameters -- Base is always applied,
// and others can be optionally selected to apply on top of that

var ParamSets = params.Sets{

  {Name: "Base", Desc: "these are the best params", Sheets:params.Sheets{

	"Network": &params.Sheet{
		
		{Sel: "Prjn", Desc: "no extra learning factors",
			Params: params.Params{
				"Prjn.Learn.Norm.On": "false",
				"Prjn.Learn.Momentum.On": "false",
				"Prjn.Learn.WtBal.On": "false",
					}},
		
		{Sel: "Layer", Desc: "needs some special inhibition and learning params",
			Params: params.Params{
				"Layer.Learn.AvgL.Gain": "1.5", // this is critical! 2.5 def doesn't work
				"Layer.Inhib.Layer.Gi": "1.3",
				"Layer.Inhib.ActAvg.Init": "0.5",
				"Layer.Inhib.ActAvg.Fixed": "true",
				"Layer.Act.Gbar.L": "0.1",
						}},
						
	   	{Sel: ".Back", Desc: "top-down back-projections MUST have lower relative weight scale, otherwise network hallucinates",
			Params: params.Params{
				"Prjn.WtScale.Rel": "0.3",
		}},
		},
		}},

  {Name: "Hebbian", Desc: "Hebbian-only learning params", Sheets: params.Sheets{

	"Network": &params.Sheet{
	
		{Sel: "Prjn", Desc: "",
			Params: params.Params{
				"Prjn.Learn.XCal.MLrn": "0",
				"Prjn.Learn.XCal.SetLLrn": "true",
				"Prjn.Learn.XCal.LLrn": "1",
	}},
	},
	}},

{Name: "ErrorDriven", Desc: "Error-driven-only learning params", Sheets: params.Sheets{

	"Network": &params.Sheet{

	        {Sel: "Prjn", Desc: "",
			Params: params.Params{
				"Prjn.Learn.XCal.MLrn": "1",
				"Prjn.Learn.XCal.SetLLrn": "true",
				"Prjn.Learn.XCal.LLrn": "0",

	}},
	},
	}},

{Name: "Hebb+ErrorDriven", Desc: "Mixed Hebbian and error-correcting learning params", Sheets: params.Sheets{
	"Network": &params.Sheet{

	        {Sel: "Prjn", Desc: "",
			Params: params.Params{
				"Prjn.Learn.XCal.MLrn":    "1",
				"Prjn.Learn.XCal.SetLLrn": "false",
				"Prjn.Learn.XCal.LLrn":    "1",
	}},
	},
	}},
}

Variables Listed for the Sim struct

	type Sim struct { 

Sim encapsulates the entire simulation model, defining which variables are part of the simulation data structure (struct). All functionality is defined as methods on this struct. This structure keeps all relevant state information organized and available without having to pass everything around as arguments to methods.

It also defines which variables show up in the control panel in NetView. Many of the variables listed here can be accessed or modified in the left hand panel in NetView, depending on settings. (note the view tags for the fields which provide hints to how things should be displayed). Definitions of view tags can be found at https://github.com/goki/gi and in the associated Wiki

If you are modifying an existing project, you probably don't have to do much here or in New, except to add any new layers you create and to update any layer name changes.

Look at this definition in various projects to see how visibility and ability to change a value from the GUI are indicated.

You need to define LayStatNms here for the layers on which you record detailed statistics.

// New creates new blank elements and initializes defaults

	func (ss *Sim) New() {

Give the specific LayStatNms values here for the layers on which you record detailed statistics.

Configuring the Network

	func (ss *Sim) ConfigNet(net *leabra.Network) {

	  net.InitName(net, "PatAssoc")

This function has all the statements for setting up the layers and connections among layers.

Also provides positioning information for all the layers.

The following lines define the name of each Layer, its shape and dimensions, and its type.

The results are assigned to the variables inp, hid, and out, which are used in the commands that connect the layers. Note that variable names such as inp, hid, and out are arbitrary; you can use other names, and you can add more layers with the appropriate commands.

	inp := net.AddLayer2D("Input", 1, 4, emer.Input)
	hid := net.AddLayer2D("Hidden", 1, 4, emer.Hidden)
	out := net.AddLayer2D("Output", 1, 2, emer.Target)

The line below assigns a new Full projection (which fully connects the two layers it joins) to the variable full, so that the constructor name does not have to be typed out repeatedly. You could do the same for other Prjn types, such as OneToOne.

	full := prjn.NewFull()

The first command below connects the input layer to the hidden layer unidirectionally. The second connects the hidden and output layers bidirectionally.

	net.ConnectLayers(inp, hid, full, emer.Forward)
	net.BidirConnectLayers(hid, out, full)

If you want to create an inhibitory connection between two layers, then you would do it by using emer.Inhib in the statement for connecting two layers as follows:

        net.ConnectLayers(inp, hid, full, emer.Inhib)

The following is an example of a command that positions the layers in the 3D Network view. More detailed documentation, listing all the relevant options, can be found in the emergent/relpos/rel.go file on GitHub.

	hid.SetRelPos(relpos.Rel{Rel: relpos.Above, Other: "Input", YAlign:
	relpos.Front, XAlign: relpos.Left, YOffset: 1})

These commands build the network and initialize the weights to random starting values.

	net.Defaults()
	ss.SetParams("Network", false) // only set Network params
	err := net.Build()
	if err != nil {
		log.Println(err)
		return
	}
	net.InitWts()
	}

If you want to put unit names on nodes, see the Dogs and Cats project.

Configuring a One to One Projection

The Full projection does not need to be configured, as it automatically connects each node in the sending layer to each node in the receiving layer. However, other projection types, such as OneToOne, do need to be configured. For example, for OneToOne you need to specify the number of connections, the starting sending node, and the starting receiving node.

Other types of projections can be found in the prjn folder in:

https://github.com/emer/emergent

Definition of one to one projection

This is the definition of the parameters for a OneToOne projection, taken from the emergent code in onetoone.go in the folder specified above.

OneToOne implements point-to-point one-to-one pattern of connectivity between two layers

    type OneToOne struct {
        NCons     int `desc:"number of recv connections to make (0 for entire size of recv layer)"`
        SendStart int `desc:"starting unit index for sending connections"`
        RecvStart int `desc:"starting unit index for recv connections"`
    }

Remember emergent indices always start at 0.

If needed, these three parameters are set in a program after you create a NewOneToOne, as in the example below.

Setting up one to one projection in ConfigNet

First we define the layers for the network. Then we define the projections that are going to be used. Finally we connect the layers.

  func (ss *Sim) ConfigNet(net *leabra.Network) {
      net.InitName(net, "PatAssoc")
      inp := net.AddLayer2D("Input", 2, 11, emer.Input)
      hid := net.AddLayer2D("Hidden", 6, 6, emer.Hidden)
      pos := net.AddLayer2D("PosEval", 1, 1, emer.Hidden)
      neg := net.AddLayer2D("NegEval", 1, 1, emer.Hidden)
      out := net.AddLayer2D("Output", 1, 2, emer.Target)

      full := prjn.NewFull()

Below we create a new OneToOne projection for the connection between the NegEval layer, which consists of just one node, and the Output layer, which consists of two nodes.

Because the single unit in NegEval is connected to just the first unit in the Output layer, we can simply use the defaults. And because the first unit in NegEval is connected only to the first unit in Output (and vice versa), we can connect the layers with BidirConnectLayers, as we do in the following section.

    neg2out := prjn.NewOneToOne()

Here we create a new OneToOne projection for the connection between the PosEval layer and the Output layer. Because the single unit in PosEval is connected to the second unit in the Output layer, we cannot use the defaults, so we use RecvStart to set the starting receiving index (1, the second unit).

Because the PosEval layer has only one node, the Output layer has two, and PosEval is connected to the second Output node, we need to define separate projections for the forward and backward directions and use the ConnectLayers command for each direction, rather than BidirConnectLayers.

     pos2out := prjn.NewOneToOne()
     pos2out.RecvStart = 1

     out2pos := prjn.NewOneToOne()
     out2pos.SendStart = 1

    // Here we connect all the layers.
      net.ConnectLayers(inp, hid, full, emer.Forward)
      net.BidirConnectLayers(hid, pos, full)
      net.BidirConnectLayers(hid, neg, full)
      net.ConnectLayers(pos, out, pos2out, emer.Forward)
      net.ConnectLayers(out, pos, out2pos, emer.Forward)
      net.BidirConnectLayers(neg, out, neg2out)

Setting up Training and Testing Environment

func (ss *Sim) ConfigEnv() {

This is where you configure the training (and testing) environments, giving information about the training and testing tables.

Configure Training and Testing Trials

func (ss *Sim) ConfigEnv() {
    if ss.MaxRuns == 0 { // allow user override
        ss.MaxRuns = 10
    }
    if ss.MaxEpcs == 0 { // allow user override
        ss.MaxEpcs = 50
        ss.NZeroStop = 5
    }
   // sets Name, Description and the data file that the Training environment uses

    ss.TrainEnv.Nm = "TrainEnv"
    ss.TrainEnv.Dsc = "training params and state"
    ss.TrainEnv.Table = etable.NewIdxView(ss.Pats)

    // ss.TrainEnv.Sequential = true // Training is normally random, if you
    // uncommented, this would have training go sequentially through the
    // Training data, as in the srn.

    ss.TrainEnv.Validate()
    ss.TrainEnv.Run.Max = ss.MaxRuns // note: we are not setting epoch max -- do that manually

   

See ss project for example of how you would set up different testing patterns that are different from the training patterns.

 // sets Name, Description and the data file that the Testing environment uses

    ss.TestEnv.Nm = "TestEnv"
    ss.TestEnv.Dsc = "testing params and state"
    ss.TestEnv.Table = etable.NewIdxView(ss.Pats)
    ss.TestEnv.Sequential = true // ensures testing goes sequentially through the Testing Data
    ss.TestEnv.Validate()

// note: to create a train / test split of pats, do this:

// all := etable.NewIdxView(ss.Pats)

// splits, _ := split.Permuted(all, []float64{.8, .2}, []string{"Train", "Test"})

// ss.TrainEnv.Table = splits.Splits[0]

// ss.TestEnv.Table = splits.Splits[1]

    ss.TrainEnv.Init(0)
    ss.TestEnv.Init(0)
}

Defining/Opening Training, Testing Data files

The following is modified to read external files rather than stored Assets. It is taken from the err_hidden program and reads in three different data files. The uncommented code reads data from external tab-delimited text files; the OpenCSV commands do that reading.

As noted above, if you read data from external files, you must launch the compiled program by typing ./projectname at the command line while in the relevant folder so it can find the data files in the current folder; double-clicking the executable will start the program but will not read in the data files.

NOTE: The commented code could be uncommented and used to read data files that are stored as Assets in the executable. If you use the OpenPatAsset commands, comment out the OpenCSV commands.

If you embed the data into the executable, use the OpenPatAsset command to read the data from the Assets stored in the executable. To embed the data, there is a block of code (called bindata.go; a version is found in the source code for most of the projects) that you include in the folder from which you will create your executable. Instructions can be found on the GitHub page at:

https://github.com/emer/emergent/wiki/Embed

func (ss *Sim) OpenPats() {

    // ss.OpenPatAsset(ss.Easy, "easy.tsv", "Easy", "Easy Training patterns")

    // ss.OpenPatAsset(ss.Hard, "hard.tsv", "Hard", "Hard Training patterns")

    // ss.OpenPatAsset(ss.Impossible, "impossible.tsv", "Impossible", "Impossible Training patterns")

    ss.Easy.OpenCSV("easy.tsv", etable.Tab)

    ss.Hard.OpenCSV("hard.tsv", etable.Tab)

    ss.Impossible.OpenCSV("impossible.tsv", etable.Tab)

}

Setting Up ApplyInputs to Ensure Correct Layers are Referenced

You need to make sure you give the proper layer names for training and testing when you create a new network. These go in the string slice assigned to lays.

// ApplyInputs applies input patterns from given environment.

// It is good practice to have this be a separate method with appropriate
// args so that it can be used for various different contexts (training, testing, etc).

func (ss *Sim) ApplyInputs(en env.Env) {

    ss.Net.InitExt() 
    
// clear any existing inputs -- not strictly necessary if always
// going to the same layers, but good practice and cheap anyway

// the for loop goes through the list of layers[lays] and takes the
// relevant data for that layer from the training or testing tables and
// applies it to the network before the program starts to let activation
// spread through the network.

    lays := []string{"Input", "Output"}
    for _, lnm := range lays {
        ly := ss.Net.LayerByName(lnm).(leabra.LeabraLayer).AsLeabra()
        pats := en.State(ly.Nm)
        if pats != nil {
            ly.ApplyExt(pats)
        }
    }
}

Modifying or Configuring Logs

For all logs there is a Config function that defines the structure of the table (its Schema) and what variables will be recorded, and a separate function that actually collects the data and writes it to the log. Below we first describe the function that writes data to a log table, and then the function that configures the log so it has the proper structure for the intended data. We use the TrainEpcLog as the example, but the basic ideas generalize to the other logs.

This function first calculates a number of variables, such as the SSE for the current Epoch (EpcSSE), and then writes the relevant data to the TrainEpcLog.

Example: TrainEpcLog

// LogTrnEpc adds data from current epoch to the TrnEpcLog table.

// computes epoch averages prior to logging.

func (ss *Sim) LogTrnEpc(dt *etable.Table) {

     row := dt.Rows
     dt.SetNumRows(row + 1)
     epc := ss.TrainEnv.Epoch.Prv // this is triggered by increment so use previous value
     nt := float64(ss.TrainEnv.Table.Len()) // number of trials in view
     ss.EpcSSE = ss.SumSSE / nt
     ss.SumSSE = 0
     ss.EpcAvgSSE = ss.SumAvgSSE / nt
     ss.SumAvgSSE = 0
     ss.EpcPctErr = float64(ss.SumErr) / nt
     ss.SumErr = 0
     ss.EpcPctCor = 1 - ss.EpcPctErr
     ss.EpcCosDiff = ss.SumCosDiff / nt
     ss.SumCosDiff = 0
     if ss.FirstZero < 0 && ss.EpcPctErr == 0 {
         ss.FirstZero = epc
     }
     if ss.EpcPctErr == 0 {
         ss.NZero++
     } else {
         ss.NZero = 0
     }

For each column specified in the Log Schema (defined in the Config function that follows this example), this block writes the value into the cell at the current row of the named column (Run, Epoch, etc.). The code first writes the individual variables; the "for" loop then iterates over the list of layers being recorded and writes each layer's ActAvg value. If you want to write other variables to this log, you need to define the corresponding columns in the Config function.

     dt.SetCellFloat("Run", row, float64(ss.TrainEnv.Run.Cur))
     dt.SetCellFloat("Epoch", row, float64(epc))
     dt.SetCellFloat("SSE", row, ss.EpcSSE)
     dt.SetCellFloat("AvgSSE", row, ss.EpcAvgSSE)
     dt.SetCellFloat("PctErr", row, ss.EpcPctErr)
     dt.SetCellFloat("PctCor", row, ss.EpcPctCor)
     dt.SetCellFloat("CosDiff", row, ss.EpcCosDiff)
     for _, lnm := range ss.LayStatNms {
         ly := ss.Net.LayerByName(lnm).(leabra.LeabraLayer).AsLeabra()
         dt.SetCellFloat(ly.Nm+" ActAvg", row, float64(ly.Pools[0].ActAvg.ActPAvgEff))
     }
     // note: essential to use Go version of update when called from another goroutine
     ss.TrnEpcPlot.GoUpdate()
     if ss.TrnEpcFile != nil {
         if ss.TrainEnv.Run.Cur == 0 && epc == 0 {
             dt.WriteCSVHeaders(ss.TrnEpcFile, etable.Tab)
         }
         dt.WriteCSVRow(ss.TrnEpcFile, row, etable.Tab)
     }
}

Configuring a Log

This function configures the Train Epoch Log. If you want to record variables that are not in the example or project from which you are building this, or you want to delete variables, you need to edit this configuration function. It first gives all the meta data such as the table name and then defines the etable Schema, which defines the structure of the table. "sch" holds the table structure and gives the name of the column, and the data type that will be stored in it, such as INT64 or FLOAT64. The first part defines a column for each individual variable. It is then followed by a "for" loop that loops through the list of Layers in the network that you are recording (LayStatNms) and creates a variable for ActAvg for each layer (concatenates Layer name with Variable name), creates a column in the Log, specifies the data type, and appends this to the "sch" schema structure. The last command sets the Schema (structure) of the table from "sch".

The preceding function, LogTrnEpc, then records Epoch level data to this etable.

func (ss *Sim) ConfigTrnEpcLog(dt *etable.Table) {

     dt.SetMetaData("name", "TrnEpcLog")
     dt.SetMetaData("desc", "Record of performance over epochs of training")
     dt.SetMetaData("read-only", "true")
     dt.SetMetaData("precision", strconv.Itoa(LogPrec))
     sch := etable.Schema{
         {"Run", etensor.INT64, nil, nil},
         {"Epoch", etensor.INT64, nil, nil},
         {"SSE", etensor.FLOAT64, nil, nil},
         {"AvgSSE", etensor.FLOAT64, nil, nil},
         {"PctErr", etensor.FLOAT64, nil, nil},
         {"PctCor", etensor.FLOAT64, nil, nil},
         {"CosDiff", etensor.FLOAT64, nil, nil},
     }
     for _, lnm := range ss.LayStatNms {
         sch = append(sch, etable.Column{lnm + " ActAvg", etensor.FLOAT64, nil, nil})
     }
     dt.SetFromSchema(sch, 0)
}

Configuring the GUI

This is found in the ConfigGui function. You can probably just accept the default for most of this, but if you are creating a new program, you will want to use this section to set the following (see the code below):

1) App Name in gi.SetAppName

2) Brief description of the project in gi.SetAppAbout

3) New Main Window and name in gi.NewMainWindow

This is also where you can add a bunch of commands to the toolbar, if you wish.

```go
////////////////////////////////////////////////////////////////////////////////////////////
// Gui

// ConfigGui configures the GoGi gui interface for this simulation.
func (ss *Sim) ConfigGui() *gi.Window {
	width := 1600
	height := 1200

	// gi.WinEventTrace = true

	gi.SetAppName("ra25")
	gi.SetAppAbout(`This demonstrates a basic Leabra model. See <a href="https://github.com/emer/emergent">emergent on GitHub</a>.</p>`)

	win := gi.NewMainWindow("ra25", "Leabra Random Associator", width, height)
```

Creating a Program That Can Run Without the GUI

If you wish to create a program that can run without the GUI, you need to do two things.

First, set up your main run function as essentially two separate functions, as below. The first function creates and configures the sim and then checks whether there are arguments on the command line beyond the command that started the program. If there are, it assumes that you want to run without the GUI and calls the CmdArgs function, which reads and executes those arguments. If there are no additional arguments, it calls the guirun function, defined below, which starts the GUI. Second, you need to define the CmdArgs function, given below, which will be called in nogui mode.
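The GUI-or-not decision in main reduces to a single predicate on os.Args. The sketch below isolates that dispatch logic; the useGui helper is hypothetical, introduced here only for illustration, not part of the actual sim code:

```go
package main

import (
	"fmt"
	"os"
)

// useGui reports whether the program should start the GUI: any argument
// beyond the program name itself means "run without the gui".
// (hypothetical helper mirroring the len(os.Args) > 1 test in main)
func useGui(args []string) bool {
	return len(args) <= 1
}

func main() {
	if useGui(os.Args) {
		fmt.Println("starting gui")
	} else {
		fmt.Println("running nogui with args:", os.Args[1:])
	}
}
```

Pulling the test into a helper like this also makes it easy to switch to an explicit -nogui flag later without touching main.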

```go
func main() {
	TheSim.New()
	TheSim.Config()
	if len(os.Args) > 1 {
		TheSim.CmdArgs() // simple assumption is that any args = no gui -- could add explicit arg if you want
	} else {
		gimain.Main(func() { // this starts gui -- requires valid OpenGL display connection (e.g., X11)
			guirun()
		})
	}
}
```

This is the function that will be called if you run the gui.

```go
func guirun() {
	TheSim.Init()
	win := TheSim.ConfigGui()
	win.StartEventLoop()
}
```

This is the function that will be called if you run at the command line.

```go
func (ss *Sim) CmdArgs() {
	ss.NoGui = true
	var nogui bool
	var saveEpcLog bool
	var saveRunLog bool
	var note string
	flag.StringVar(&ss.ParamSet, "params", "", "ParamSet name to use -- must be valid name as listed in compiled-in params or loaded params")
	flag.StringVar(&ss.Tag, "tag", "", "extra tag to add to file names saved from this run")
	flag.StringVar(&note, "note", "", "user note -- describe the run params etc")
	flag.IntVar(&ss.MaxRuns, "runs", 10, "number of runs to do (note that MaxEpcs is in paramset)")
	flag.BoolVar(&ss.LogSetParams, "setparams", false, "if true, print a record of each parameter that is set")
	flag.BoolVar(&ss.SaveWts, "wts", false, "if true, save final weights after each run")
	flag.BoolVar(&saveEpcLog, "epclog", true, "if true, save train epoch log to file")
	flag.BoolVar(&saveRunLog, "runlog", true, "if true, save run epoch log to file")
	flag.BoolVar(&nogui, "nogui", true, "if not passing any other args and want to run nogui, use nogui")
	flag.Parse()
	ss.Init()

	if note != "" {
		fmt.Printf("note: %s\n", note)
	}
	if ss.ParamSet != "" {
		fmt.Printf("Using ParamSet: %s\n", ss.ParamSet)
	}

	if saveEpcLog {
		var err error
		fnm := ss.LogFileName("epc")
		ss.TrnEpcFile, err = os.Create(fnm)
		if err != nil {
			log.Println(err)
			ss.TrnEpcFile = nil
		} else {
			fmt.Printf("Saving epoch log to: %v\n", fnm)
			defer ss.TrnEpcFile.Close()
		}
	}
	if saveRunLog {
		var err error
		fnm := ss.LogFileName("run")
		ss.RunFile, err = os.Create(fnm)
		if err != nil {
			log.Println(err)
			ss.RunFile = nil
		} else {
			fmt.Printf("Saving run log to: %v\n", fnm)
			defer ss.RunFile.Close()
		}
	}
	if ss.SaveWts {
		fmt.Printf("Saving final weights per run\n")
	}
	fmt.Printf("Running %d Runs\n", ss.MaxRuns)
	ss.Train()
}
```

Configuring Input DataTables

Unlike the previous version of emergent, there is no wizard that can be used to create the various Training and Testing DataTables that the program uses, with the appropriate headers and structure. There are two possibilities for doing so.

  1. One possibility is to explicitly create the structure and column names for a datatable in a .tsv (tab-delimited text) file in a program like Excel, or to modify an existing datatable. See the sim programs, such as err_hidden or ra25, for examples of the structure of a Training and Testing file, with appropriate headers that define the cell structure of the tables.

  2. Use a ConfigPats function, such as the one below, to define the structure of a datatable and the variables you wish to record (also see the example in ra25). In the ra25 program, the call to ConfigPats is commented out in the Config function, so the existing input data file in the folder will not change. If you uncommented ConfigPats, rebuilt the program, and ran it, it would create a new version of the input data file. So you can follow the ra25 example: in your own program, edit ConfigPats to build the structure of an initial data file to your specification, uncomment it, build, and run, which should create a data table with the specified structure. Then add your Training and Testing data to that table. Once the data table exists, comment out ConfigPats and build again, and when you run the program it should read in your new data.

```go
func (ss *Sim) ConfigPats() {
	dt := ss.Pats
	dt.SetMetaData("name", "TrainPats")
	dt.SetMetaData("desc", "Training patterns")

	// Define the names of the Columns/Layers (the column in the datatable and the
	// corresponding layer in the network need identical names), their dimensions,
	// and shapes. Here []int{5, 5} defines 5 x 5 layers, but you can make layers
	// any shape that makes sense. The final 25 is the number of rows to generate
	// in the DataTable. Additional Columns/Layers can be added here.
	dt.SetFromSchema(etable.Schema{
		{"Name", etensor.STRING, nil, nil},
		{"Input", etensor.FLOAT32, []int{5, 5}, []string{"Y", "X"}},
		{"Output", etensor.FLOAT32, []int{5, 5}, []string{"Y", "X"}},
	}, 25)

	// The following generates permuted binary rows in the columns indicated by
	// dt.Cols[n] (the Name column is not included). The first number (here 6, but
	// it could be something else) is how many entries are on, the second is the
	// on value, and the third is the off value. Note that this is merely one way
	// to create a datatable with the correct structure, which can then be edited
	// in Excel or a similar program; you could make other choices.
	patgen.PermutedBinaryRows(dt.Cols[1], 6, 1, 0)
	patgen.PermutedBinaryRows(dt.Cols[2], 6, 1, 0)

	// Save the resulting data table.
	dt.SaveCSV("random_5x5_25_gen.csv", etable.Tab, etable.Headers)
}
```
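To make concrete what patgen.PermutedBinaryRows produces for each row, here is a self-contained sketch of the same idea in plain Go. It is an illustration of the behavior, not the actual patgen implementation:

```go
package main

import (
	"fmt"
	"math/rand"
)

// permutedBinaryRow fills a slice of n values with nOn entries set to onVal
// at random positions and the rest set to offVal -- what PermutedBinaryRows
// does for each row of a tensor column. (illustrative sketch only)
func permutedBinaryRow(n, nOn int, onVal, offVal float32) []float32 {
	row := make([]float32, n)
	for i := range row {
		row[i] = offVal
	}
	perm := rand.Perm(n) // random ordering of indices 0..n-1
	for _, idx := range perm[:nOn] {
		row[idx] = onVal
	}
	return row
}

func main() {
	row := permutedBinaryRow(25, 6, 1, 0) // one 5x5 pattern with 6 units on
	on := 0
	for _, v := range row {
		if v == 1 {
			on++
		}
	}
	fmt.Println(on) // every generated row has exactly 6 on units
}
```

Each call produces exactly nOn active units, so every training pattern has the same total activity, which keeps layer inhibition behaving consistently across patterns.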

Creating a Scalar Value Layer Using popcode: Turning a Single Scalar Value into a Distributed Representation

Encoding a Single Scalar Value into a Distributed Representation and Applying It to a Layer

[Based on John Rohrlich's code for encoding scalar value layers (popcode)] This describes how to create a one-dimensional scalar layer, e.g., 1 x N or N x 1. A separate TwoD function handles two-dimensional layers.

  1. Define tensors for the popcode layers in the Sim struct. Below these are defined as input tensors, but there is no reason one could not be used to define a Target tensor, with a different name. You also don't have to define exactly two; define as few or as many as you want.

```go
InOneTsr *etensor.Float32 `view:"-" desc:"for holding layer values"`
InTwoTsr *etensor.Float32 `view:"-" desc:"for holding layer values"`
```

  2. Include a function to construct the tensors if they don't already exist, with the desired dimensions. Typical is 11 x 1, but you could create a larger layer if desired. Only construct as many as you need. Note that ConfigInputTsrs is an arbitrary name; you could use an alternative, depending on the function of the layers you are using (e.g., ConfigEvalTsrs if you were using evaluation layers).

```go
func (ss *Sim) ConfigInputTsrs() {
	if ss.InOneTsr == nil {
		ss.InOneTsr = etensor.NewFloat32([]int{11, 1}, nil, nil)
	}
	if ss.InTwoTsr == nil {
		ss.InTwoTsr = etensor.NewFloat32([]int{11, 1}, nil, nil)
	}
}
```

  3. Make sure the tensor-construction function (e.g., ConfigInputTsrs()) is called in the Config function.

  4. Define the layer and its dimensions in ConfigNet as you do for all other layers in the network. Typical dimensions are 11 x 1, but they could be different; they do need to match the dimensions of the tensors you create. Use standard inhibition for a Target layer. The layer name can be whatever you like and is typically different from the tensor name; this name is how you refer to the layer.

  5. The 0 to 1 values that are the input data for the layer go in a column of the training etable named for the layer, in the first element of a two-dimensional cell with the same size as the network layer. That is, if the network layer is 11 x 1, then the cell should be 11 x 1.

  6. Apply the values to the network, as in the modified ApplyInputs function below:

    1. Read the 0 to 1 layer value from the etable.

    2. Encode the layer value as an n-value tensor the size of the layer.

    3. Apply the tensor to the appropriate layer.

Notes about the code below:

pc is set to popcode.OneD{}.

pc.Defaults() sets defaults for the popcode functions; these can be found in the popcode.go file.

```go
v := float32(pats.FloatVal1D(0))
```

reads the first element of the tensor in pats into a single float32 value.

```go
// Encode generates a pattern of activation of given size to encode given value.
// n must be 2 or more. pat slice will be constructed if len != n.
// If add == false (use Set const for clarity), values are set to pattern
// else if add == true (Add), then values are added to any existing,
// for encoding additional values in same pattern.

pc.Encode(&ss.InOneTsr.Values, v, 11, add)
```

This encodes the float32 value v into an 11-element tensor (the size depends on the layer), which is then applied to the relevant layer. add is a boolean: pass false (or the Set constant, for clarity) to set the values into the pattern, or true (Add) to add them to any existing values. You would typically use Set. Encode needs the & prefix to indicate that this is a pointer to the underlying slice, which it modifies.

Note that ly.ApplyExt takes the tensor without the & prefix, as the value is not being modified. ly refers to the layer.

```go
func (ss *Sim) ApplyInputs(en env.Env) {
	ss.Net.InitExt() // clear any existing inputs -- not strictly necessary if always
	// going to the same layers, but good practice and cheap anyway
	pc := popcode.OneD{}
	pc.Defaults()
	lays := []string{"InputOne", "InputTwo", "Output"}
	for _, lnm := range lays {
		ly := ss.Net.LayerByName(lnm).(leabra.LeabraLayer).AsLeabra()
		pats := en.State(ly.Nm)
		if pats != nil {
			if ly.Nm == "InputOne" {
				v := float32(pats.FloatVal1D(0))
				pc.Encode(&ss.InOneTsr.Values, v, 11, popcode.Set)
				ly.ApplyExt(ss.InOneTsr)
			}
			if ly.Nm == "InputTwo" {
				v := float32(pats.FloatVal1D(0))
				pc.Encode(&ss.InTwoTsr.Values, v, 11, popcode.Set)
				ly.ApplyExt(ss.InTwoTsr)
			}
			if ly.Nm == "Output" {
				ly.ApplyExt(pats)
			}
		}
	}
}
```
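To build intuition for what a population-code encoder produces, here is a self-contained sketch that spreads a 0 to 1 scalar across 11 units as a Gaussian bump. This illustrates only the idea behind popcode.OneD.Encode; it is not the library's actual implementation, and the sigma parameter here is an assumption of this sketch:

```go
package main

import (
	"fmt"
	"math"
)

// encodeScalar spreads a 0..1 scalar v across the units of pat as a
// Gaussian bump centered on the unit corresponding to v.
// (illustrative sketch, not the actual popcode.OneD.Encode code)
func encodeScalar(pat []float32, v float32, sigma float64) {
	n := len(pat)
	center := float64(v) * float64(n-1) // unit index the value maps to
	for i := range pat {
		d := (float64(i) - center) / (sigma * float64(n))
		pat[i] = float32(math.Exp(-0.5 * d * d)) // peak activation = 1 at center
	}
}

func main() {
	pat := make([]float32, 11)
	encodeScalar(pat, 0.5, 0.1)
	// the unit nearest the encoded value is maximally active
	maxI := 0
	for i, a := range pat {
		if a > pat[maxI] {
			maxI = i
		}
	}
	fmt.Println(maxI) // prints 5: the middle of an 11-unit layer
}
```

Because nearby values produce overlapping bumps, similar scalars get similar distributed representations, which is what lets the network generalize across values.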

Decoding from a Scalar Value Layer to a Single Scalar Value and Saving to a Data File

After training or testing, you then need to turn the layer representation back into a single scalar value and save it to a data file.

  • When saving to the data file, you need to get the value of the layer from the network and turn it into a single FLOAT64 number that will be written to a log file.

  • popcode.Decode returns a FLOAT32, so you need to convert it to FLOAT64, as in the following code.

The example below shows how to do this for a TstTrlLog.

Note that because this is a single FLOAT64 value, you will use a SetCellFloat command, not a SetCellTensor command, to save the value to the table.

Note: when you define this column in the DataTable schema (in the Config function that configures the TstTrlLog), the result should be etensor.FLOAT64 with a nil shape.

To set up Decode, you need to do the following:

In the Sim struct, define a TmpVals field to hold a slice of float32 ([]float32), as below (this is a temporary location).

  • TmpVals holds the result of reading the values from a scalar value layer, so that they can then be turned into a single scalar value before being written to an etable.

```go
type Sim struct {
	// ...
	TmpVals []float32
	// ...
}
```

Assuming that the sim is referred to as ss:

ss.TmpVals is a slice of float32s, and the '&' in front of it provides UnitVals (see below) with a pointer (i.e., the address of the slice). UnitVals takes the specified value of the layer and writes it to ss.TmpVals; it needs the address so it can modify the values of the slice. Decode doesn't need the address (the preceding &) because it isn't going to change the values; it decodes them into another value. So for Decode you just pass ss.TmpVals.

This is the code that needs to be in the function that writes results to a Log, as in the example below.

```go
inp1 := ss.Net.LayerByName("InputOne").(leabra.LeabraLayer).AsLeabra()
// assigns this layer to the variable inp1, so it can be referenced below

pc := popcode.OneD{}
// creates a popcode.OneD value and assigns it to pc, so it can be referred to by a shorter name

inp1.UnitVals(&ss.TmpVals, "ActM") // writes the ActM values from the layer
// referenced by inp1 into the slice of floats

iva1 := pc.Decode(ss.TmpVals) // decodes the slice into a single float32

dt.SetCellFloat("InAct1M", row, float64(iva1)) // converts the float32 to float64 and writes it to a column in the Log
```

This block can be duplicated as necessary.
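The decoding direction can likewise be illustrated with a self-contained sketch: take the activation-weighted average of each unit's preferred value. This mirrors the idea behind popcode.OneD.Decode, though it is not the library's exact code:

```go
package main

import "fmt"

// decodeScalar recovers a 0..1 scalar from a bump of activation as the
// activation-weighted average of each unit's preferred value.
// (illustrative sketch, not the actual popcode.OneD.Decode code)
func decodeScalar(acts []float32) float32 {
	n := len(acts)
	var sum, wsum float32
	for i, a := range acts {
		val := float32(i) / float32(n-1) // unit i's preferred value in 0..1
		sum += a
		wsum += a * val
	}
	if sum == 0 {
		return 0 // no activation: nothing to decode
	}
	return wsum / sum
}

func main() {
	// a symmetric bump centered on unit 5 of 11 decodes back to 0.5
	acts := make([]float32, 11)
	acts[4], acts[5], acts[6] = 0.5, 1.0, 0.5
	fmt.Println(decodeScalar(acts))
}
```

Because the estimate is a weighted average over the whole bump, it is robust to noise in any single unit, which is one of the practical benefits of population coding.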

Here is a LogTstTrl and ConfigTstTrlLog pair that configures the Log and then writes to it using Decode from the popcode (scalar value layer) code.

```go
// LogTstTrl adds data from current trial to the TstTrlLog table.
// log always contains number of testing items
func (ss *Sim) LogTstTrl(dt *etable.Table) {
	epc := ss.TrainEnv.Epoch.Prv // this is triggered by increment so use previous value
	inp1 := ss.Net.LayerByName("InputOne").(leabra.LeabraLayer).AsLeabra()
	inp2 := ss.Net.LayerByName("InputTwo").(leabra.LeabraLayer).AsLeabra()
	inp3 := ss.Net.LayerByName("InputThree").(leabra.LeabraLayer).AsLeabra()
	inp4 := ss.Net.LayerByName("InputFour").(leabra.LeabraLayer).AsLeabra()
	out := ss.Net.LayerByName("Output").(leabra.LeabraLayer).AsLeabra()
	pc := popcode.OneD{}

	trl := ss.TestEnv.Trial.Cur
	row := trl
	if dt.Rows <= row {
		dt.SetNumRows(row + 1)
	}

	dt.SetCellFloat("Run", row, float64(ss.TrainEnv.Run.Cur))
	dt.SetCellFloat("Epoch", row, float64(epc))
	dt.SetCellFloat("Trial", row, float64(trl))
	dt.SetCellString("TrialName", row, ss.TestEnv.TrialName.Cur)
	dt.SetCellFloat("Err", row, ss.TrlErr)
	dt.SetCellFloat("SSE", row, ss.TrlSSE)
	dt.SetCellFloat("AvgSSE", row, ss.TrlAvgSSE)
	dt.SetCellFloat("CosDiff", row, ss.TrlCosDiff)

	for _, lnm := range ss.LayStatNms {
		ly := ss.Net.LayerByName(lnm).(leabra.LeabraLayer).AsLeabra()
		dt.SetCellFloat(ly.Nm+" ActM.Avg", row, float64(ly.Pools[0].ActM.Avg))
	}

	ovt := ss.ValsTsr("Output")

	// Decodes each scalar value layer into a float32 scalar value, converts it to
	// float64, and writes it to the cell in the named column in the appropriate row.
	inp1.UnitVals(&ss.TmpVals, "ActM")
	iva1 := pc.Decode(ss.TmpVals)
	dt.SetCellFloat("InAct1", row, float64(iva1))

	inp2.UnitVals(&ss.TmpVals, "ActM")
	iva2 := pc.Decode(ss.TmpVals)
	dt.SetCellFloat("InAct2", row, float64(iva2))

	inp3.UnitVals(&ss.TmpVals, "ActM")
	iva3 := pc.Decode(ss.TmpVals)
	dt.SetCellFloat("InAct3", row, float64(iva3))

	inp4.UnitVals(&ss.TmpVals, "ActM")
	iva4 := pc.Decode(ss.TmpVals)
	dt.SetCellFloat("InAct4", row, float64(iva4))

	out.UnitValsTensor(ovt, "ActM")
	dt.SetCellTensor("OutActM", row, ovt)
	out.UnitValsTensor(ovt, "Targ")
	dt.SetCellTensor("OutTarg", row, ovt)

	// note: essential to use Go version of update when called from another goroutine
	ss.TstTrlPlot.GoUpdate()
}
```

This configures the TstTrlLog so that it has the correct columns and the correct types for the Cells in the Log.

```go
func (ss *Sim) ConfigTstTrlLog(dt *etable.Table) {
	out := ss.Net.LayerByName("Output").(leabra.LeabraLayer).AsLeabra()

	dt.SetMetaData("name", "TstTrlLog")
	dt.SetMetaData("desc", "Record of testing per input pattern")
	dt.SetMetaData("read-only", "true")
	dt.SetMetaData("precision", strconv.Itoa(LogPrec))

	nt := ss.TestEnv.Table.Len() // number in view
	sch := etable.Schema{
		{"Run", etensor.INT64, nil, nil},
		{"Epoch", etensor.INT64, nil, nil},
		{"Trial", etensor.INT64, nil, nil},
		{"TrialName", etensor.STRING, nil, nil},
		{"Err", etensor.FLOAT64, nil, nil},
		{"SSE", etensor.FLOAT64, nil, nil},
		{"AvgSSE", etensor.FLOAT64, nil, nil},
		{"CosDiff", etensor.FLOAT64, nil, nil},
	}
	for _, lnm := range ss.LayStatNms {
		sch = append(sch, etable.Column{lnm + " ActM.Avg", etensor.FLOAT64, nil, nil})
	}
	sch = append(sch, etable.Schema{
		{"InAct1", etensor.FLOAT64, nil, nil}, // defines columns in etable for
		{"InAct2", etensor.FLOAT64, nil, nil}, // the scalar values decoded from
		{"InAct3", etensor.FLOAT64, nil, nil}, // the layers and adds to Schema
		{"InAct4", etensor.FLOAT64, nil, nil},
		{"OutActM", etensor.FLOAT64, out.Shp.Shp, nil},
		{"OutTarg", etensor.FLOAT64, out.Shp.Shp, nil},
	}...)
	dt.SetFromSchema(sch, nt)
}
```
