Frequently Asked Questions:

 

Exercise 2

 

Some of the questions on exercise 1 are relevant for this exercise too (see below).

 

Q1. We looked up the definitions of the three color blindness conditions that were mentioned in the exercise. What we found everywhere is different from the definition given in the exercise. To be more specific, the correct definitions seem to be:

1- Protanopia - red-blindness - lacking the long-wavelength (“red”) sensitive retinal cones.

2- Deuteranopia - green-blindness - lacking the medium-wavelength (“green”) cones.

3- Tritanopia - blue-blindness - lacking the short-wavelength (“blue”) cones.

 

A1. You are absolutely right. There was an error in the exercise definition (it is now corrected). Please use the above numbering for the ‘Type’ parameter.

 

Q2. Is there some simple way to create a random circles mosaic for the test cards?

A2. I am not aware of a single command that creates such a mosaic.

 

Q3. Where can I read about the visual system?

A3. You are encouraged to read the relevant chapter in: Physiology of Behavior, by Neil R. Carlson. This book is available at the Social Sciences Library.

 

Q4. I am trying to understand the value gained by using circles of different sizes. After all, with sufficient resolution, every picture can be displayed using constant-size circles at fixed positions on a plane, as in a television. And furthermore, why circles and not squares?

A4. The circles are meant to be not too small, and to have gaps between them. Can you think why this is important? (Hint: recall the camouflage example we discussed in class.)

 

Q5. Is it OK to fill the matrix with random square clusters, such that each cluster contains a different, randomly chosen pattern of circles, in order to create the mosaic, as in the following example?

A5. As the gaps form a noticeable grid, this may affect a subject taking such a test. It is therefore suggested that you come up with a better solution.

 

Q6. We couldn't think of a card, or find an example on the net, with two separate images: one that only a normal observer sees and another that only a color-blind observer sees. We spent a lot of time trying to do it, but without much success...

A6. The basic idea is to use spots that are of different colors but look the same to a color-blind person. Creating an object that is visible only to the normal person using such colors is straightforward. For the other way around (an object that is visible only to the color-blind person), you can use spots of the above colors to ‘confuse’ the normal person. You can then combine these two on the same card.

 

Q7. Do we need to build a network built from elements that act as formal neurons, or are we supposed to write an algorithm that simulates the action of such a network? If it is a network, how simplified should it be: do the neurons' functions all need to be simple "integrate and fire" functions, or can we use more complex operations (multiplying matrices, etc.)?

A7. The idea is to build a network, integrate-and-fire wherever possible. In any case you should explain what you do, and in what aspects your model is similar (or not) to the human brain.

 

Q8. How much time would you consider reasonable for the generation of 1 card? Can I use loops to create the circles?

A8. For a 500-by-500-pixel card, a few seconds should suffice, but certainly not more than a minute.

For an answer to the second question, see the first question of Exercise 1.

 

Q9. Is there a matlab function that draws circles I could use?

A9. I don’t know of a ready-made function that can help with the matrix generation, but creating a Boolean matrix that represents a circle is quite straightforward.
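For example, a minimal sketch (the variable names here are only illustrative; width, height, cx, cy and r are assumed to be defined already):

  [X, Y] = meshgrid(1:width, 1:height);        % pixel coordinate grids
  circle = (X - cx).^2 + (Y - cy).^2 <= r^2;   % logical matrix, true inside the circle
  card(circle) = 1;                            % stamp the circle onto the card matrix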

 

Q10. Will you be testing our code on different inputs than what we show in the report?

A10. Probably.

 

Q11. NormalOut, BlindOut: for this input, should these contain only the shape (as in file1), or the whole mosaic on a gray scale, as in the attached file (file2)?

A11. The second.

 

Q12. NormalIn, BlindIn - Could you give examples for the inputs?

A12. The following zip file (example2.zip) contains an input file (a .mat file) of identical NormalIn, BlindIn matrices, and a gif file demonstrating the generated card.

Note that this example is for the sole purpose of demonstrating the input format. The output card is not necessarily good for any colorblindness testing.

 

Q13. Can we build the test cards and the neural network as though the retina's cones work in the RGB system and not the LMS system, as they really do? Although this will result in cards that are incorrect as real colorblindness tests, it will allow us to build a more correct and powerful model for the second part of the exercise.

A13. The idea is to make both the first and the second part as real as you can. In particular, you are referred to the course bibliography (or to the book mentioned in question 3) to find out the response of each cone type as a function of wavelength.

 

Q14. In the question we're supposed to return a matrix Test, which represents the test card. This matrix represents a color image, which in MATLAB can be either:

1. Indexed image, referring to a given COLORMAP (which is another matrix); -or-

2. Truecolor image, which is 3-dimensional: each pixel holds its 3 RGB values.

The exercise says that Test and NormalIn should both be of equal dimensions. However, this doesn't let us use a truecolor image; on the other hand, we don't have a place to return a colormap. Which way should we use to return the color Test image?

A14. You may use either, or another representation if you prefer. By ‘equal dimensions’ I only meant that the number of pixels on the x axis is the same in the input and the output matrices (and similarly for the y axis).
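For illustration, the two standard forms look roughly like this (h, w and the colormap values are only placeholders):

  % Truecolor: each pixel holds its R, G, B values
  Test = zeros(h, w, 3);        % h-by-w-by-3 array, values in [0,1]

  % Indexed: each pixel holds an index into a separate colormap matrix
  TestIdx = ones(h, w);         % h-by-w matrix of colormap indices
  cmap    = [0 0 0; 1 0 0];     % example colormap: black and red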

 

Q15. Following the question in class regarding the testing of the implementation of the second part:

A15. Please also supply a Matlab file main2.m that performs the following task:

Its input is given as a gif file input2.gif, which is an image of a test card.

Its output is the file out2-3.ps, which demonstrates the responses of the four types of visual systems (Normal, and the three types of color blindness).
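A possible skeleton (simulate_system is a hypothetical placeholder for your own network simulation; adapt the names and the type numbering to your code):

  [idx, map] = imread('input2.gif');             % read the test-card image
  card = ind2rgb(idx, map);                      % convert the indexed gif to RGB
  types = {'Normal', 'Protanopia', 'Deuteranopia', 'Tritanopia'};
  figure;
  for i = 1:4
      out = simulate_system(card, i - 1);        % your own simulation (hypothetical name; 0 taken here as normal vision)
      subplot(2, 2, i); imagesc(out); colormap(gray); axis off;
      title(types{i});
  end
  print('-dps', 'out2-3.ps');                    % save the figure as a postscript file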

 

Q16. We checked Carlson's book, and it gives only the peak values of the cones. The functions are plotted but not given mathematically.

A16. The graph there represents (biological) measurements. You can assume Gaussian behavior of the functions and extract the relevant parameters (e.g., sigma) from the graph.
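For example, a rough sketch of such an approximation (the peak wavelength and sigma below are only illustrative; extract the actual values from the graph yourself):

  lambda = 380:780;                                    % wavelengths in nm
  peakL = 565; sigmaL = 60;                            % illustrative L-cone parameters
  respL = exp(-(lambda - peakL).^2 / (2 * sigmaL^2));  % L-cone response curve
  % ... and similarly for the M and S cones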

 

Q17. Is it OK to use convolution?

A17. Yes, as long as you explain why this reasonably demonstrates an operation of the network.
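For instance, a convolution with a small kernel amounts to each output ‘neuron’ computing the same weighted sum over its local neighborhood of inputs. A minimal sketch (the weights and the input_layer matrix are only illustrative):

  w = [0 1 0; 1 -4 1; 0 1 0] / 4;          % example receptive-field weights of one unit
  out = conv2(input_layer, w, 'same');     % every output unit applies the same weighted sum
  out = max(out, 0);                       % simple threshold: ‘fire’ only for positive net input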

 

Q18. Is it OK to use several matrices for representing a single level of the processing (e.g., a matrix for each type of photoreceptor)?

A18. Yes.

 

Q19. It seems that not every mixture of RGB values can be represented by a single wavelength (e.g., (R,G,B) = (1,1,1), i.e., white). Is that correct?

A19. Yes.

 

Q20. We found ready-made Matlab code that transforms the RGB representation to the LMS representation, which, as far as we understood, is exactly the excitation measure of the three types of cones. Can we use it?

A20. No. Regardless of whether LMS in general, or any specific implementation of it, is suitable for this exercise, you are expected to implement the first stage yourself as well. That is, implement the transformation from the input to the response of the photoreceptors.

 

Q21. Where can I find the exact wavelength of the Red/Green/Blue of RGB?

A21. A short search on the internet can help, as can a short trip to the library.

 

Q22. In case we have several matrices representing a single level of the process, is it OK to apply a convolution to each of the matrices separately? We are worried that we might lose color differences that the convolution was supposed to discover if we check each factor by itself.

A22. This is indeed one of the decisions you should make in this exercise. Remember that you are trying to emulate the human visual system.

 

Q23. Can we use the methods from an article we found?

A23. Yes.

 

Q24. I did not understand how to save an image in PS format in matlab.

A24. Try the lookfor command (e.g., lookfor postscript) or the help navigator.

One relevant command may be ‘print’.
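For example, something along these lines (the file name and the ‘card’ variable are only illustrative), assuming the image is already displayed in the current figure:

  imagesc(card); colormap(gray); axis off;   % display the image in the current figure
  print('-dps', 'mycard.ps');                % save the current figure as a postscript file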

 

Q25. I found an article and a web page which seem relevant, and may help me solve the exercise. Can I use them?

A25. Yes, as long as you implement the code yourself, and explain what you did and why it makes sense.

 

Q26. The only way I found to create a postscript file is by printing/saving the current figure; there are no functions that receive an image and save it as ps. This means that the following combination won't work: do_show = 0; do_save = 1; (one must show in order to save). Can I assume that this combination will not be given?

A26. Yes.

 

Q27. Can we send you the exercise document via email instead of a hard copy?

A27. No.

 

Q28. I managed to simulate the photoreceptors and the ganglion cells, but I’m not sure what the next phase of the simulation should be.

A28. In general, you can also simulate the very basic first processing of V1, e.g., edge detection, using various DoG (Difference of Gaussians) filters.
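For example, a rough sketch of a DoG filter applied by convolution (the kernel size and sigmas are only illustrative choices, and ‘layer’ stands for whatever matrix holds the previous stage's responses):

  [x, y] = meshgrid(-10:10, -10:10);                          % filter support
  s1 = 2; s2 = 4;                                             % center and surround widths
  g1 = exp(-(x.^2 + y.^2) / (2*s1^2)); g1 = g1 / sum(g1(:));  % center Gaussian
  g2 = exp(-(x.^2 + y.^2) / (2*s2^2)); g2 = g2 / sum(g2(:));  % surround Gaussian
  dog = g1 - g2;                                              % center-minus-surround kernel
  response = conv2(layer, dog, 'same');                       % responses of a layer of DoG units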

 

Q29. We could not find a Tritanopia color-blindness test on the internet. We did find a link that says the Ishihara color-blindness test does not test for Tritanopia, and that there are other, more exact tests for this condition.

A29. This should not be a problem. You are generating the test (and not using a ready-made one).

 

Q30. May we use more than two postscript files for the output (say one for each type of card and each type of blindness)?

A30. No. But please read question 15.

 

Q31. In case we detect lines in various orientations, is it enough to use 0, 45, and 90 degrees?

A31. The number of orientation types is indeed one of the parameters of your model, and is left for you to decide.

You can explain how it affects the performance, and the resemblance to the biological system.

For this parameter, as well as for others, you don’t have to make (and sometimes cannot make) a 100% accurate simulation.

 

Q32. Can I assume that the RGB values represent the magnitude of response of the three cone types?

A32. No. You are expected to calculate the cones’ responses.

 

 

Exercise 1

 

Q1. Is it strictly forbidden to use loops?

A1. Definitely not. If you use loops in a way that does not cause your program to work slowly, then it is fine. Note that there is almost always a way to avoid loops, and doing so will usually make your program run faster.

 

Q2. Who decides how to perform the experiments: which parameters to fix, which parameters to change, how many tests to perform, how the screen (graphics) looks, etc.?

A2. You do.

 

Q3. What names should I choose for files and functions that were not specified in the exercise description?

A3. Use any meaningful name you want, and add it to the documentation (2nd page).

 

Q4. I am taking the CNS course, but am not a CS student, and therefore don't have an account. What should I do?

A4. Please contact the system staff (6408823). They will help you open an account.

 

Q5. I have a problem. When shifting a set of points in order to create an RDS pair, I am left with “empty slots” which, if left black, “reveal” the shifted shape. How do I solve this?

A5. This is indeed one of the problems you are to deal with.

 

Q6. Second part of the exercise: when displaying the images one after the other, is it a 'one shot'? In other words: show the pair once and hope the user will identify the shape, or show them in a loop until the shape is identified?

A6. One shot (for each pair) is what we originally had in mind. But you may conduct your experiment otherwise.

 

Q7. Second part of the exercise: 'spy' takes about half a second to display the figure (100x100) (it looks like it draws it from left to right), so displaying them one after the other does not occur instantly, and identifying the shape is almost impossible. (If I open two figures and switch between them using Windows alt-tab, I can easily identify the shape.) Is there some other way of displaying them one after the other that is faster and will occur instantly?

A7. Switching the images can be done, for example, by drawing them on separate figures and using the ‘figure’ command. The ‘pause’ command may help if you have problems with graphics updates, i.e., it may help force a graphics update at a given time. These are, however, only technical suggestions. Other solutions may also be available.
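For example, a minimal sketch along those lines (rds1, rds2 and the timing values are only illustrative):

  figure(1); spy(rds1);      % draw the first image of the pair on one figure
  figure(2); spy(rds2);      % draw the second image on a separate figure
  figure(1); pause(0.5);     % bring the first figure forward and force a graphics update
  figure(2); pause(0.5);     % then switch to the second one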

 

Q8. Does a single RDS couple contain all the shapes in the shape list, or only one or two?

A8. All.

 

Q9. Does the subject need to recognize several shapes at the same time, or only one?

A9. You decide how to construct the experiment.

 

Q10. We had problems creating our .mat file and loading it. Can you publish some inputs, including a .mat file, so we can understand the format?

A10. Done. Look at the exercises page.

 

Q11. Is there some kind of method for creating an RDS? How do you create an RDS couple given a list of shapes?

A11. This is exactly the technical task you are facing. I therefore cannot solve it for you.

 

Q12. I planned to create an MMI with Matlab using GUIDE; is this OK? Or is it not preferred for a UNIX environment?

A12. To see what you can and cannot do on UNIX, I suggest you run UNIX. You may consult the system staff about whether and how you can do it from home. You can also use one of the labs. Note that the technical specification was mainly in regard to the first part of the exercise. The way you conduct your experiment is more flexible, and is left for you to decide.

 

Q13. What do you mean by screening the picture, or displaying a black background, as described in the test possibilities?

A13. Just what it says: you see the first image, then you don’t (and see something else instead, say a black background), then you see the second image, then you don’t.

 

Q14. The output you published shows some random distribution of dots; we saw no connection to the input file. Additionally, we don't understand what our experiment will check (we don't see how one could fail to identify a shape in an RDS pair).

A14. It seems that you answered your own question: the output in the example contains two non-intersecting squares. You can see them by quickly alternating the view from one image to the other.

 

Q15. Can we assume that the input will not ask to shift the shape outside of the matrix boundaries (it seems that this is an invalid input, and the exercise instructions say that we don't need to deal with that)?

A15. Yes.

 

Q16. When adding a 'screening' figure to the experiment, I find that it is impossible to identify the shapes. Does this make sense?

A16. Not impossible, but definitely harder. You can try masking for shorter periods.

 

Q17. How many experiments do you expect us to do?

A17. Enough so that you can derive conclusions.

 

Q18. Is there some way of getting mouse actions on an axes, and getting where it was clicked?

A18. Did you try “lookfor mouse”?
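One standard option is the ‘ginput’ command, which waits for mouse clicks on the current axes and returns their coordinates:

  [x, y] = ginput(1);    % wait for a single mouse click and return its axes coordinates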

 

Q19. It seems that the input file is illegal and/or not consistent with the output files.

A19. You are right. There was a typo there. The input file is now corrected.

 

Q20. I am using sprand(height, width, density) to create the random matrix with the density given to the function 'rds'. Matlab writes about sprand that: "... will generate significantly fewer nonzeros than requested if m*n is small or density is large".

Do we need to take this into consideration as well, or is using sprand(height, width, density) good enough?

A20. Please first read the Matlab help for any command you want to use. Unless you are sure this function fits our purpose, I suggest you “manually” generate your matrices (say, using the ‘rand’ command and a relevant Boolean operator).
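For example:

  M = rand(height, width) < density;   % each entry is 1 with probability ‘density’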

 

Q21. When using 'spy' to display the image, the result is very different from the bmp created with imwrite. When I use spy for a large matrix (300x200) I get an almost 'black' picture, whereas with imwrite I get a good one. I tried changing the marker size of the 'spy' command, but when decreasing it I get even weirder results. Could you please send an example of the figure that you get in your example when using spy, and let me know why this is happening? (Basically I call spy and imwrite on the same matrix.)

A21. Maybe you have one of the following two problems:

If you are using a 0/1 matrix, you may try multiplying it by 256 for viewing with the ‘image’ command.

If you are using spy and get strange output, maybe you have negative values or values close to zero.
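For example, along the lines of the first suggestion (M stands for your 0/1 matrix):

  colormap(gray(256));    % gray-scale colormap with 256 entries
  image(M * 256);         % 0 maps to black, 1 maps to white (indices into the colormap)
  axis equal; axis off;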

 

Q22. We wanted to ask you whether a simple command-line program (with no GUI) would be sufficient for the experiment.

A22. I will not interfere in your experiment planning. Just remember to describe any relevant detail of your experiment in the report. That is, assuming you meant the activation of the experiment (such as parameter setting) and not the presentation of the RDS and the data collection.

 

Q23. Is it possible to collect the data on paper while the experiment is performed, and then insert it into Matlab or Excel to show graphs? Since the time frames during the experiment are short, there is no time to record the different inputs.

A23. No. It should be automatic. The data collection and recording may be performed, for example, after each RDS couple is presented. This should resolve your problem.

 

Q24. My matrix has only 0/1 values, with a high density (0.8). When I call spy I see an almost black picture. If I 'enlarge' the picture, then at some stage I get the result I expect. If I change the marker size to '4' it looks better, but then I get a kind of white line separators; then if I make the figure a bit smaller it looks good.

I think this is due to spy putting a 'marker' at each position, and when there are a lot of 'ones' the markers overlap and therefore seem black. This is why it doesn't happen with smaller matrices, whereas with image only the block itself is filled (no overlapping).

Can I use the image command instead of 'spy' to display the RDS?

A24. This problem (or feature ;-) ) of ‘spy’ is known. I think you explained it correctly. For the first part of the exercise you are asked to use the ‘spy’ command. For the experiment you may use any command that you wish. Note that this problem occurs only within a specific range of density parameters (and screen sizes), so controlling it may also help.