Lab 7: Spiking data processed by Nengo Neurons#
The objectives of this lab are:
To understand the format of event-based data
To understand how event-based data is read by Nengo neurons
Specifications
◻ Filename changed to reflect last name of person submitting assignment
◻ Code runs error free
◻ Jupyter notebook is saved such that outputs appear upon opening file (and therefore on gradescope)
◻ (On your honor) All markdown and resources read thoroughly and fully understood
◻ AER is in proper format, either created by CSV or a python array
◻ Video generated from AER is at least 50x50 pixels, 10 frames
◻ Video generated from AER can be displayed outside of Nengo
◻ Video generated from AER can be read by Nengo neurons
◻ Video generated from AER can be displayed from Nengo simulation output spikes
◻ Video displayed from Nengo simulation output spikes can be clearly interpreted
Set up#
Ensure you are using your 495 Virtual Environment before you begin!
Then, import Nengo, and other supporting libraries into your program to get started:
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import HTML
from matplotlib.animation import ArtistAnimation
import nengo
import pandas as pd
Build your spiking data (i.e. your AER)#
Your goal is to create an interesting spiking video, either with a *.csv file just like ILAN City Frame Data.csv
OR with an event array (thereby bypassing the need for this line of code: events = csv_to_event_array(csv_filename, start_time, end_time)
).
The most interesting video gets a prize - have fun with this!
.csv file
Take a look at the ILAN City Frame Data.csv
file. You will see that it contains the x- and y-coordinates at which an event took place, the polarity of that event, and the time at which it occurred. Your video must be at least 50x50 pixels and contain 10 frames (after you’ve read your data over windows of time). If you would like your video to have both positive and negative spiking events, use polarities of 1 and 0, respectively.
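As a hedged sketch of that format, a few hypothetical rows of such a file might look like the following (column order x, y, p, t with no header row, matching the names=['x', 'y', 'p', 't'] used when the CSV is read; the filename and event values here are made up for illustration):

```python
import pandas as pd

# Hypothetical events: a pixel turns on, a neighbor turns on, then the
# first turns off again. Columns are x, y, polarity, time (microseconds).
rows = [
    (10, 20, 1, 0),
    (11, 20, 1, 2500),
    (10, 20, 0, 5000),  # trailing-edge (off) event
]
pd.DataFrame(rows).to_csv("my_events.csv", header=False, index=False)
print(open("my_events.csv").read())
```

Your own CSV will of course contain many more events, spread over a 50x50 (or larger) grid.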
events_array
The events array contains a list for every event consisting of the following values:
y: The vertical coordinate of the event.
x: The horizontal coordinate of the event.
p: The polarity of the event (0 for off, 1 for on).
t: The event timestamp in microseconds.
You can see the format of this list - [(y, x, p, t)]
for each event - within the csv_to_event_array
function. You then fill the final events_array
with np.array(events_list, dtype=[('y', 'i4'), ('x', 'i4'), ('p', 'i4'), ('t', 'i4')])
.
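A minimal sketch of building such a structured array (the two events here are hypothetical), showing that fields are then accessed by name rather than by position:

```python
import numpy as np

# Two hypothetical events in (y, x, p, t) order, matching csv_to_event_array
events_list = [(5, 12, 1, 0), (6, 12, 0, 1000)]
events = np.array(events_list,
                  dtype=[('y', 'i4'), ('x', 'i4'), ('p', 'i4'), ('t', 'i4')])

print(events["x"])  # horizontal coordinates of every event
print(events["t"])  # timestamps in microseconds
```

Because access is by field name, code like events[:]["t"] later in this lab works regardless of the order in which the fields were declared.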
Notes
You are welcome to add in negative polarity, but only if it’s a trailing edge! Don’t try to have “color” using negative polarity. Think of it more as a shadow to your positive polarity (leading edge).
You will see a spot to build a CSV or to build an array. CHOOSE ONE. Comment out / delete the other.
Do NOT mess with the csv_to_event_array
function
def csv_to_event_array(csv_filename: str, start_frame: int, end_frame: int) -> np.ndarray:
    # ======= DVS camera - Physics dept ===========
    df = pd.read_csv(csv_filename, names=['x', 'y', 'p', 't'])
    # set initial time to 0 for Nengo simulator to run data right away
    # (.copy() avoids pandas' SettingWithCopyWarning on the line below)
    sub_df = df[(start_frame <= df['t']) & (df['t'] <= end_frame)].copy()
    sub_df['t'] = sub_df['t'] - sub_df['t'].iloc[0]
    events_list = [(y, x, p, t) for x, y, p, t in sub_df.values]
    events_array = np.array(events_list, dtype=[('y', 'i4'), ('x', 'i4'), ('p', 'i4'), ('t', 'i4')])
    return events_array
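To picture the leading/trailing-edge idea from the notes above, here is a hedged sketch (all coordinates and times are made up) of a bright edge sweeping right: the newly covered column fires an on (p=1) event while the column it just left fires an off (p=0) event:

```python
# Hypothetical moving-edge events: the edge advances one column every
# 1000 us; the new column fires p=1 and the vacated column fires p=0.
events_list = []
for step in range(3):
    t = step * 1000
    x_new = 10 + step
    events_list.append((25, x_new, 1, t))          # leading edge (on)
    if step > 0:
        events_list.append((25, x_new - 1, 0, t))  # trailing edge (off)
print(events_list)
```

The off events act as a "shadow" trailing the on events, which is exactly the use of negative polarity the notes permit.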
####################################### CHOOSE ONE ######################################
#----------------Build a CSV--------------------
# Your data!
csv_filename = '???.csv'
start_time = ???
end_time = ???
events = csv_to_event_array(csv_filename, start_time, end_time)
print("Successfully read %r" % csv_filename)
#----------------Build an Array------------------
# Your data!
events_list = ???
#[(x_loc1, y_loc1, polarity1, time event1 occurred),
# (x_loc2, y_loc2, polarity2, time event2 occurred),
# ...,
# (x_locN, y_locN, polarityN, time eventN occurred)]
events = np.array(events_list, dtype=[('x', 'i4'), ('y', 'i4'), ('p', 'i4'), ('t', 'i4')])
start_time = ???
end_time = ???
View the data (NOT using neurons yet)#
You will need to adjust your image size. Depending on when you placed your events, you may need to adjust your dt_frame_us to determine the number of frames you will have in your video. For now, I’m leaving the time parameters as they were in our tutorial.
This is your chance to ensure your video is doing what you thought it would!
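As a quick numeric sanity check on the frame count (a sketch with assumed times: 100 ms of events binned into 10 ms frames should yield 10 frames, matching the t_frames computation in the cell below):

```python
import numpy as np

# Hypothetical timing values for illustration only
t_length_us = 100_000   # 100 ms of events, in microseconds
dt_frame_us = 10_000    # 10 ms per frame, in microseconds
t_frames = dt_frame_us * np.arange(int(round(t_length_us / dt_frame_us)))
print(len(t_frames), t_frames[:3])
```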
# ======= Your Data! ===========
img_height = ???
img_width = ???
t_length_us = end_time - start_time #value is in microseconds
t_length_s = t_length_us * 1e-6 #convert prior value to seconds
dt_frame_us = (10 * 1e-3) * 1e6 #this value is also in microseconds
t_frames = dt_frame_us * np.arange(int(round(t_length_us / dt_frame_us))) #number of frames in video
# ==============================
fig = plt.figure()
imgs = []
for t_frame in t_frames:
    t0_us = t_frame
    t1_us = t0_us + dt_frame_us
    t = events[:]["t"]
    m = (t >= t0_us) & (t < t1_us)
    events_m = events[m]

    # Empty frame
    frame_img = np.zeros((img_height, img_width))

    for sub_event in events_m:
        # show "off" (0) events as -1 and "on" (1) events as +1
        event_sign = 2.0 * sub_event["p"] - 1
        frame_img[sub_event["y"], sub_event["x"]] += event_sign

    img = plt.imshow(frame_img[:, ::-1], vmin=-1, vmax=1, animated=True)
    imgs.append([img])

ani = ArtistAnimation(fig, imgs, interval=50, blit=True)
HTML(ani.to_jshtml())
Read Spike Data function to input into Nengo#
We can now load our data into a Nengo model using the readSpikeData
class we will create. Recall, its input arguments are:
event_data
: The structured events array you built above (with fields y, x, p, and t).pool
: Number of pixels to pool over in the vertical (first argument) and horizontal (second argument) directions, respectively. The larger the pool, the fewer neurons required and the faster things run. NOTE: image dimensions must be evenly divisible by the pool size in their respective directions.img_height
and img_width
: Dimensions of the camera data.
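The pooling described above can be sketched on a tiny array. This hypothetical example mean-pools a 4x4 frame with pool=(2, 2), mirroring the loop inside the class's step method:

```python
import numpy as np

# Hypothetical 4x4 frame of spike rates, mean-pooled down to 2x2
frame = np.arange(16, dtype=float).reshape(4, 4)
pool_y, pool_x = 2, 2
pooled = np.zeros((4 // pool_y, 4 // pool_x))
for i in range(0, 4, pool_y):
    for j in range(0, 4, pool_x):
        # each output pixel is the mean of a pool_y x pool_x patch
        pooled[i // pool_y, j // pool_x] = np.mean(frame[i:i+pool_y, j:j+pool_x])
print(pooled)
```

Note why the divisibility requirement exists: if 4 were not evenly divisible by the pool size, the patches would not tile the frame cleanly.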
Recall: our camera produces positive (leading edge) and negative (trailing edge) spikes, called the polarity of the event. In this case we have two polarities: one for positive events, one for negative. The first half of our neurons represents positive events, the second half negative.
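To make the two-halves layout concrete, here is a small sketch of the flattened indices for a hypothetical 3x4 image (the index arithmetic matches the manual flattening used in the step method):

```python
# Hypothetical 3x4 image: locate an event at (y=2, x=1) inside the
# length 2 * img_height * img_width input vector.
img_height, img_width = 3, 4
y, x = 2, 1
pos_index = y * img_width + x                            # polarity 1, first half
neg_index = img_height * img_width + y * img_width + x   # polarity 0, second half
print(pos_index, neg_index)
```

The second half is simply the first half's index shifted by img_height * img_width, which is why the ensemble below needs 2x the number of pixels in neurons.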
DO NOT MESS WITH THE readSpikeData
class!
# A simple object that converts event data into per-timestep spike inputs
class readSpikeData:
    def __init__(self, event_data, img_height, img_width, pool=(1, 1)):
        self.xvals = event_data[:]["x"]
        self.yvals = event_data[:]["y"]
        self.time = event_data[:]["t"]
        self.pol = event_data[:]["p"]
        self.img_ht = img_height
        self.img_wt = img_width
        self.pool = pool

    def step(self, t):
        dt = .001
        t_lower = (t - dt) * 1e6
        t_upper = t * 1e6
        times = self.time
        indices = np.nonzero((times >= t_lower) & (times < t_upper))[0]
        pool_y, pool_x = self.pool

        data = np.zeros((self.img_ht*self.img_wt*2,), dtype=int)
        for index in indices:
            if self.pol[index] == 1:
                # Manually flatten data using (i*y_len)+j **note x,y vals swapped
                data[self.yvals[index]*self.img_wt + self.xvals[index]] = 1/dt
            else:
                data[self.img_ht*self.img_wt + self.yvals[index]*self.img_wt + self.xvals[index]] = 1/dt

        # reshape the data so pooling computations are more intuitive
        data_sz = data.shape
        pos_data = data[0:int(data_sz[0]/2)]
        neg_data = data[int(data_sz[0]/2)::]
        pos_data = pos_data.reshape(self.img_ht, self.img_wt)
        neg_data = neg_data.reshape(self.img_ht, self.img_wt)

        if pool_x > 1 or pool_y > 1:
            pooled_ht = int(self.img_ht/pool_y)
            pooled_wt = int(self.img_wt/pool_x)
            pooled_posdata = np.zeros((pooled_ht, pooled_wt))
            pooled_negdata = np.zeros((pooled_ht, pooled_wt))
            for i in range(0, self.img_ht, pool_y):
                for j in range(0, self.img_wt, pool_x):
                    pooled_posdata[int(i/pool_y), int(j/pool_x)] = np.mean(pos_data[i:i+pool_y, j:j+pool_x])
                    pooled_negdata[int(i/pool_y), int(j/pool_x)] = np.mean(neg_data[i:i+pool_y, j:j+pool_x])
            pooled_posdata = pooled_posdata.reshape(pooled_ht * pooled_wt)
            pooled_negdata = pooled_negdata.reshape(pooled_ht * pooled_wt)
            pooled_data = np.append(pooled_posdata, pooled_negdata)
            return pooled_data
        else:
            return data
Build your model#
This should all look familiar! We are feeding in our input values (i.e. our spike data) through a node, representing the pixel changes with neurons, and reading the data out using probes.
Notes:
You will likely not need to pool, but feel free to give it a go just to ensure you understand the pooling function.
Ensure your sim time accommodates all frames of your video
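One hedged way to check that second note (the variable values below are assumptions standing in for your own numbers):

```python
# Hypothetical sanity check: the simulation must run at least as long
# as the span of your events, or later frames will be empty.
t_length_us = 100_000        # assumed total event duration, microseconds
sim_time_s = 0.1             # assumed value you will pass to sim.run(...)
assert sim_time_s >= t_length_us * 1e-6, "sim too short for your video"
print("sim time covers all frames")
```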
pool = (1, 1)  # (pool_height y direction, pool_width x direction)
# NOTE // image indices must be evenly divisible by pool size in respective directions!
inp = readSpikeData(events, img_height, img_width, pool)

model = nengo.Network(label="Spiking Data")
with model:
    input_node = nengo.Node(inp.step)
    input_neurons = nengo.Ensemble(int(img_height/pool[0] * img_width/pool[1] * 2), 1)
    nengo.Connection(input_node, input_neurons.neurons, transform=1.0)  # / np.prod(pool))

    probes_nodes = nengo.Probe(input_node)
    probes = nengo.Probe(input_neurons.neurons)

with nengo.Simulator(model) as sim:
    sim.run(???)
View the data (using Nengo neurons!)#
This section should not require any edits. However, critically analyze your data! If I can’t tell what’s going on in this plot, you will earn a 0 on the lab. Why? Keep reading.
If you can see your images but cannot actually make out your events, remember that neurons are noisy. If your events are a single pixel in size, they will easily get lost in the noise. If this occurs, adjust your AER.
sim_t = sim.trange()
n_pixels = int(img_height/pool[0]) * int(img_width/pool[1])
shape = (len(sim_t), int(img_height/pool[0]), int(img_width/pool[1]))
output_spikes_pos = sim.data[probes][:, 0:n_pixels].reshape(shape) * sim.dt
output_spikes_neg = sim.data[probes][:, n_pixels:n_pixels*2].reshape(shape) * sim.dt

dt_frame = dt_frame_us * 1e-6  # this is in seconds
t_frames = dt_frame * np.arange(int(round(t_length_s / dt_frame)))

fig = plt.figure()
imgs = []
for t_frame in t_frames:
    t0 = t_frame
    t1 = t_frame + dt_frame
    m = (sim_t >= t0) & (sim_t < t1)

    frame_img = np.zeros((int(img_height/pool[0]), int(img_width/pool[1])))
    frame_img -= output_spikes_neg[m].sum(axis=0)
    frame_img += output_spikes_pos[m].sum(axis=0)
    frame_img = frame_img / np.abs(frame_img).max()

    img = plt.imshow(frame_img[:, ::-1], vmin=-1, vmax=1, animated=True)
    imgs.append([img])

ani = ArtistAnimation(fig, imgs, interval=50, blit=True)
HTML(ani.to_jshtml())