Frequently Asked Questions for Computational Neuroscience, Exercise 1
1. Is there any chance we can submit the exercise as a triplet?
A. Yes, but if you do, you have an additional task: the implementation for the 1st question has to handle non-centered objects (i.e. translational motion), that is, input where the objects may also move along the X and Y coordinates.
2. What should be done to get the image toolbox for Matlab 5?
A. This and other toolboxes are available for use on libra / lune (Schriber building) and on the computers at the computation center (Exact Sciences building).
3. Does the image toolbox include an edge detection function?
A. Yes.
4. Does the image toolbox include a complex log function (that is, one that translates an image matrix to its complex log matrix)?
A. Basic Matlab contains the log function (try "help elfun"). It does not convert {x,y} to {log(r), theta}; it just computes the (complex) log of a number / vector / matrix.
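A minimal Matlab sketch of the conversion (the variable names xs, ys, xc, yc are our own, for illustration): the complex log gives both coordinates at once, since log(z) = log(|z|) + i*angle(z).

```matlab
% Sketch: converting edge-pixel coordinates (xs, ys) to (log r, theta)
% relative to an object's center (xc, yc), using the complex log.
xs = [12 20 20 12]';  ys = [5 5 15 15]';   % example edge pixels
xc = 16;  yc = 10;                         % example object center
z     = (xs - xc) + i*(ys - yc);           % complex coordinates
w     = log(z);                            % w = log(r) + i*theta
logr  = real(w);                           % log of the radius
theta = imag(w);                           % angle in radians, (-pi, pi]
```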
5. Should the output be only the images that approached, compared to the previous image?
A. The output should be the indices of those images, and the {x,y} coordinates of the approaching objects in every image.
6. What assumptions can we make about the shapes of the objects we get as input?
A. You may assume the input objects are squares and rectangles only.
7. How is it possible to detect the centers of several objects in one image? (It seems as though we should make some assumptions based on information about the input.)
A. You may assume that every object is a connected component (that is, one may 'walk' from one pixel of the object to any other without leaving the object). You can calculate an object's center using binary image operations from the image toolbox.
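A minimal sketch of this approach, assuming a binary input image BW (the example objects are ours):

```matlab
% Sketch: label connected components, then take each object's centroid
% as its center.
BW = zeros(32);  BW(5:10, 4:12) = 1;  BW(20:27, 15:24) = 1;  % two rectangles
[L, n] = bwlabel(BW);              % label each connected component
for k = 1:n
    [ys, xs] = find(L == k);       % pixel coordinates of the k'th object
    xc = mean(xs);  yc = mean(ys); % centroid as the object's center
end
```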
8. What is the maximum angle of rotation that can occur between two consecutive figures?
A. No more than 8 degrees.
9. We understood from the project description that we may assume there is a focus on the center of the object, which means its center point does not change with every new image we see. Is that correct?
A. Yes (but see Question 1).
10. Is there a preferred way to look at the result of the log-polar transformation we've made, assuming we have two double-precision vectors, <theta> and <log r>? Should we implement our detection method using these two vectors, or is there a way to transform them into a matrix of some kind and use some method for matrix comparison instead?
A. You may create a zero/one matrix, where one dimension represents <theta> and the other <log r>, and put 1's in the correct cells. You may then view the matrix using the "spy" command. The detection may be performed on either the vectors or the matrices.
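A sketch of building such a matrix, with an assumed resolution of 144 theta cells and 50 log-r cells (the example vectors and resolutions are ours):

```matlab
% Sketch: pack (log r, theta) vectors into a zero/one occupancy matrix
% and view it with "spy".
theta = [0.1 1.3 -2.0 3.0]';  logr = [0.5 1.1 2.3 1.7]';  % example vectors
nTheta = 144;  nLogr = 50;                 % chosen resolution
it = 1 + mod(floor(theta / (2*pi) * nTheta), nTheta);      % theta bins
ir = 1 + floor((logr - min(logr)) / (max(logr) - min(logr) + eps) * (nLogr - 1));
M  = zeros(nLogr, nTheta);
M(ir + (it - 1) * nLogr) = 1;              % mark occupied cells (linear indexing)
spy(M)                                     % view the occupancy pattern
```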
11. When we want to convert the edge matrix to complex log coordinates, we want to insert the coordinates into a new matrix. How do we determine the dimensions of the complex log matrix (that is, the resolution of the new matrix, since the values of the complex log coordinates are not integers)?
A. You may decide which resolution to use. Consider the maximum and minimum possible values of each coordinate when you convert the complex output values to integer indices for your matrix. For example, say you think every 2.5 degrees should be one unit in your matrix; then IndexFromTheta = fix(Theta/2.5). Note that there is no index 0, and that 365 deg = 5 deg.
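For example, the index computation with both caveats handled (the angle 365 is our example input):

```matlab
% Sketch: map an angle in degrees to a 1-based matrix index at a
% resolution of 2.5 degrees per cell (so 144 cells cover the full circle).
step  = 2.5;
Theta = 365;                       % example input angle
Theta = mod(Theta, 360);           % wrap around: 365 deg = 5 deg
index = 1 + fix(Theta / step);     % 1-based, since Matlab has no index 0; here index = 3
```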
12. What is the maximum increase (or decrease) in the size of the object (from one image to the next)?
A. No more than 10 pixels.
13. If I do the project alone, is it smaller than a project for two?
A. No.
14. Is there a possibility that there will be a rotation and an approach at the same time?
A. No.
15. Using complex log on the edge map: does that mean using complex log to create an image whose axes are r and theta, and comparing it to the previous complex-log image?
A. Yes.
16. We are asked to perform edge detection on the image, and only then move it to the complex log matrix. In what way does this help? Why couldn't we perform the move to the complex log matrix on the original image?
A. You may try performing the complex log on the original image, but you might get too much data in the output matrix and not know what to do with it.
17. We perform the complex log on the edge map, but still have problems reading the output matrix. What can we do?
A. You may try to find the "corners" of the original object instead of its edges (that is, perform a "corner detector") and then continue with the log operator.
18. You've suggested that we should use complex log to transform the image to polar coordinates. The question is whether the transformation should be relative to a fixed point in the image (e.g. center or corner) or relative to the center of each object (since the eye actually focuses on an object in order to detect approach).
A. Relative to the center of each object.
19. How should we submit the first part of ex1? (Paper only? m-files on disk?)
A. Paper only, please.
20. I'll be on army duty ('Miluim') during May. What should I do?
A. Whoever has 3 or more days of army duty during May gets a 10-day extension for the submission of the exercise. If you work with other(s), you all get this extension. Please enclose an official document stating the dates of the reserve duty.
21. Is it mandatory to accomplish the exercise using the edge detection / complex log technique, or may another strategy be applied, say, calculating the area of the same object in two consecutive images and comparing them?
A. In the 1st question you are to use the complex log operator. Edge detection or corner detection is advised. In the 2nd question you may suggest other ideas, or the same algorithm, but this time you describe how to perform it using a neural network.
22. Can the objects in the image overlap each other?
A. You may assume they cannot.
23. Part b asks for a simple neural network which implements the above task. Is it sufficient that the neural network implement only the final decision, receiving as input the complex log of the 2 last images?
A. No.
24. Can objects appear or disappear in the middle of the movie (image sequence)?
A. No.
25. Can an existing object transform into a new figure, not just turning or approaching, e.g. a square that transforms into a rectangle?
A. No.
26. [Extra feature for triplets] Is the xy movement limited in pixels for every iteration?
A. Yes, no more than 8 pixels on each axis.
27. [Extra feature for triplets] Can an object move (xy) and rotate or approach at the same time?
A. Yes.
28. [Extra feature for triplets] Just to be sure: should the xy coordinates in the result be the "current" coordinates at each picture?
A. Yes.
29. [neural network] How are we to describe our network (part b)?
A. It should be described AT LEAST in a "detailed design" manner, meaning that one could implement it.
30. [neural network] Does the input to the neural network come from neurons that fire "if a pixel appears" at that point (one for each x,y coordinate)?
A. Yes.
31. [neural network] Can I assume that in each picture there is only one object, which is always centered?
A. Yes.
32. [neural network] Given that images may only move/rotate each time, and that the object always appears at the same center, can I use a method which does not use complex log at all?
A. Yes, as long as you explain why your solution is good enough.
33. [neural network] Can we have about as many neurons as there are pixels in the picture (for example, input as in Q30, and more later for processing)?
A. Yes, it makes sense.
34. We decided to use the median function (and not what is described in Q7) in order to find the center of every object. Is that right?
A. Any solution for finding the objects' centers is good, as long as it works and you explain it.
35. How do we perform corner detection? (It is quite hard, since we don't know the initial state of the objects, i.e. whether they are rotated at first or not.)
A. When the input is restricted to rectangles only, this is quite easy.
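One possible sketch, not necessarily the intended method: for a rectangle, the pixels farthest from the centroid are the four corners, at any rotation angle. The tolerance below is a guess and may need tuning:

```matlab
% Sketch: find a rectangle's corners as the pixels at maximum distance
% from its centroid (example rectangle is ours).
BW = zeros(32);  BW(8:15, 6:20) = 1;       % example rectangle
[ys, xs] = find(BW);
xc = mean(xs);  yc = mean(ys);
d  = sqrt((xs - xc).^2 + (ys - yc).^2);
isCorner = d > max(d) - 0.25;              % small tolerance for rounding
cornersX = xs(isCorner);  cornersY = ys(isCorner);
```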
36. We made the assumption that after movement along the XY axes, the center of the original object will remain within the object's boundaries. Can we use it?
A. Yes.
37. What is the fine if we are late in submitting the exercise by a few (1, 2, 3) days?
A. There is no penalty for the first 3 days. However, from the 4th day on, you lose 4 points every day. This holds with the exception of army duty (see Q20).
38. [neural network] Can we assume that there is a direct mapping from x,y coordinates to log(r) and theta coordinates (as there is in the brain), or should we find a way to do it using only neural networks?
A. No. You should specify how this is implemented with neurons.
39. [neural network] Which kind of neurons may we use? Can we use neurons that fire the value they sum, or does a neuron have to fire 1 or 0 according to a threshold?
A. The neurons either fire or not (say 1 or 0). You may use weights on the inputs of every neuron.
40. [neural network] Can a neuron have as many inputs as wanted?
A. Yes.
41. Do you have images with XY movement, so we can test our program?
A. No, but you can easily create some yourself.
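For example, a sketch that generates a short sequence with a rectangle translating a few pixels per frame (all sizes are arbitrary):

```matlab
% Sketch: build 5 test frames with a 10-by-15 rectangle moving
% 3 pixels right and 2 pixels down per frame.
frames = zeros(64, 64, 5);
for t = 1:5
    x = 10 + 3*(t-1);  y = 20 + 2*(t-1);
    frames(y:y+9, x:x+14, t) = 1;    % rectangle with top-left corner (x,y)
end
```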
42. [neural network] We noticed that all the neural network models in Matlab are learning networks. Do we have to use this feature?
A. No.
43. [neural network] It seems the networks we wish to implement are fixed in advance. How do we use Matlab to simulate networks whose weights and firing thresholds we already know?
A. You may do this using basic matrix operations.
44. How do we implement the firing threshold in the Matlab models?
A. You may do this as follows: let v be a vector of size n holding the sum of inputs for each neuron (the i'th entry is the sum of inputs to neuron number i). The operation u = (v > 0.7) sets the vector u to one for every neuron (entry) above the threshold 0.7. If you wish to have a different threshold for every neuron, you may do this by u = (v > th), where th is a vector of the same size as v, holding the thresholds.
45. Can you give a simple example of an AND / OR gate in neural networks in Matlab?
A. Let v = (a, b, c)' (the values of neurons a, b and c at stage t).
Let W be a 3-by-3 matrix:
W = [1 1 0;
     0 0 1;
     1 0 1]
Let th = (1 0 0)'.
The operation u = (W * v) > th sets u to the values of the neurons (zero or one) at stage t+1, where
a(t+1) = a(t) AND b(t)
b(t+1) = c(t)
c(t+1) = a(t) OR c(t)
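The example above, written out as a runnable Matlab snippet for one update step (the input values are our own):

```matlab
% Sketch: one update step of the gate network above.
W  = [1 1 0; 0 0 1; 1 0 1];   % connection weights
th = [1 0 0]';                % firing thresholds
v  = [1 1 0]';                % a = 1, b = 1, c = 0 at stage t
u  = (W * v) > th             % u = [1 0 1]': a AND b, then c, then a OR c
```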
46. When we transform our (x,y) to (log(r), theta), must we take into account the center of the object, or is it OK to rely on (0,0) as our center? (If we rely on (0,0) for all transformations, it should be the same, right?)
A. No. You must use the center of each object, not some arbitrary point.
47. Currently we are using an array for storing the maximum log(r) value for each object in each frame. We could replace this array with 2 variables, but it would be "messier" and less coherent. Should we choose the more efficient way, or the more "understandable" way?
A. You may assume there are O(1) objects.
48. Sometimes, when we perform edge detection using "edge" from the Matlab toolbox, we get an image where the edges of the objects are not connected (broken), thus fooling the algorithm into thinking that the edges belong to different objects ("bwlabel" gives them different labels). May we assume that the edge detection works well, so that a connected component stays connected after the edge detection phase?
A. No. But you may try performing bwlabel on the original image and then use edge detection on each object separately.
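A sketch of this per-object approach (the example image is ours):

```matlab
% Sketch: label objects on the original binary image first, then run edge
% detection per object, so a broken edge map cannot merge distinct objects.
BW = zeros(32);  BW(5:10, 4:12) = 1;  BW(20:27, 15:24) = 1;  % example input
[L, n] = bwlabel(BW);
for k = 1:n
    E = edge(L == k);      % edge map of the k'th object alone
    % ... continue with corner detection / complex log on E
end
```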
49. May we assume that the rectangles have thickness > 1 (that they are not lines)?
A. Yes.
50. Is it necessary to find when the object rotates?
A. Yes, though it is more important to detect that the object is approaching.