Show all posts by user
ML programming tips
What do you mean by rendering? Are you referring to your mex file generating the video?
Why is the video creation time so critical? It does not seem that you need a short ITI. If you increase ITI to 2 sec, there will be no noticeable delay due to the video creation.
How do you create the stimulus? Do you assign a string like 'gen(func_name)' to the C variable in the userloop?
A
by Jaewon - Questions and Answers
There is no time you can save, as long as you keep the video as a matrix. The delay occurs when a presentable stimulus is created out of the matrix, so reusing a stimulus that is already created does not take additional time. If you keep the matrix, however, a new stimulus has to be created from it every time and there is no time saving.
You should explain more about what the purpose of doing
by Jaewon - Questions and Answers
What NIMH ML and the NI board send out is just a series of 0s and 1s. It is Neuralynx that reads them as negative or positive numbers. So I guess you need to look into Neuralynx.
The strobe test sends out a bunch of numbers in which only one binary digit is set (e.g., 1, 2, 4, 8, 16, ...), so it is easy to detect a pattern in the error, if there is any.
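The walking-ones idea can be sketched like this (plain MATLAB, nothing ML-specific; the simulated fault is hypothetical):

```matlab
% Walking-ones strobe test: each code has exactly one bit set,
% so a corrupted code points directly at the faulty digital line.
sent = 2.^(0:15);                 % 1, 2, 4, 8, ..., 32768
received = sent;
received(4) = 0;                  % simulate a dead line on bit 3 (value 8)
bad = find(sent ~= received);
fprintf('Faulty bit(s): %d\n', log2(sent(bad)));  % prints 3
```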
by Jaewon - Questions and Answers
This is something you need to figure out yourself. I do not think the network eventcodes will be directly supported in NIMH ML, because 1) the network communication has a relatively long, variable delay, so users should understand what they are doing and use it at their own risk and 2) there are a bunch of tools out there already. Use search engines.
If there is an HTTP server waiting on the o
by Jaewon - Questions and Answers
Maybe it was I who was not clear. My question was whether there was anything that prevented you from sending the network eventcodes.
by Jaewon - Questions and Answers
I do not understand. Is there anything that prevents you from doing so?
by Jaewon - Questions and Answers
If your task starts with central fixation, you can also slice Neuralynx signals into each trial, subtract the X and Y values at the time of fixation in each trial and put them back together.
You have the same timestamps in both Neuralynx and BHV2. Either way should be easy.
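A minimal sketch of that per-trial re-zeroing, assuming you have already imported the Neuralynx traces and the per-trial fixation times into MATLAB (all variable names here are hypothetical):

```matlab
% x, y         : continuous eye signals from Neuralynx
% trial_onset  : sample index where each trial starts
% trial_offset : sample index where each trial ends
% fix_sample   : sample index of central fixation in each trial
for t = 1:numel(trial_onset)
    idx = trial_onset(t):trial_offset(t);
    x(idx) = x(idx) - x(fix_sample(t));   % re-zero so fixation is (0,0)
    y(idx) = y(idx) - y(fix_sample(t));
end
```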
by Jaewon - Questions and Answers
It looks like a grounding problem. From what you said, I guess you plotted the signals recorded by your signal processing system, didn't you?
I do not think there is much I can help with as long as you use the voltage input. In fact, NIMH ML has nothing to do with this problem, if your AI configuration is correct.
https://monkeylogic.nimh.nih.gov/docs_NIMultifunctionIODevice.html#AIGroundConf
by Jaewon - Questions and Answers
Turn the option on in the menu.
https://monkeylogic.nimh.nih.gov/docs_MainMenu.html#OtherDeviceSettings
by Jaewon - Questions and Answers
One problem with that approach is that you cannot control the number of dots displayed in each frame. It is better just to draw in a square from the beginning.
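Drawing a fixed number of dots uniformly inside a square is straightforward (a sketch; the names are arbitrary):

```matlab
ndots = 100;                         % number of dots per frame
side  = 10;                          % side length of the square, in degrees
xy = (rand(ndots,2) - 0.5) * side;   % ndots-by-2, uniform in the square

% To keep the count constant while the dots drift, wrap them at the
% edges; dxy (hypothetical) is the per-frame displacement of each dot.
% xy = mod(xy + 0.5*side + dxy, side) - 0.5*side;
```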
-----
If you download the package again, you will see a new adapter that does this, although I am not sure how useful it will be.
https://monkeylogic.nimh.nih.gov/docs_RuntimeFunctions.html#RectangularRDM
Regarding how to use it
by Jaewon - Questions and Answers
* Changes in 2.2.35 (Apr 3, 2023)
+ The button is added to the calibration tools so that changes can be
undone without exiting the tool.
+ RewardScheduler can set which reward channel to trigger now.
- An error that occurred when replaying GraphicContainer is fixed.
- A problem that visual stimuli show up again during ITI when users manually
  update the subject screen is fixed.
by Jaewon - News
First of all, using TaskObjects is a traditional way to create stimuli. TaskObjects are not mixed with stimuli created with adapters. In Timing Script v2, TaskObjects can be presented only with create_scene().
https://monkeylogic.nimh.nih.gov/docs_RuntimeFunctions.html#create_scene
In the following code of yours, a PIC TaskObject is assigned to ImageGraphic. ImageGraphic does not have such a
by Jaewon - Questions and Answers
Try the example in the post you linked. If you see the same problem with the example, check if your NIMH ML is up-to-date. If it is up-to-date, there may be something wrong in what you wrote. However, I cannot tell what you might miss, since you did not show any of your code.
by Jaewon - Questions and Answers
* Changes in 2.2.34 (Mar 2, 2023)
+ The option to set the pixels per degree to that of the central one degree
rather than the average across the entire screen is added. This is useful
for getting more accurate stimulus size and coordinates, when the subject
screen is larger than 20 degrees.
https://monkeylogic.nimh.nih.gov/docs_MainMenu.html#AdjustedPPD
+ A new property, Al
by Jaewon - News
The code does not work even for the first trial, due to an error. See the parts rewritten in red again. I guess you do not see the dots, because you are presenting black dots on the black background. Start from a small, working example and expand it gradually.
by Jaewon - Questions and Answers
Of course, I do. But you have taken enough of my time making me answer trivial things and fix your mistakes, so I want you to do some homework on your own. Do you read other people's postings?
by Jaewon - Questions and Answers
Your code is not working as you think, because you did not write it correctly. See the lines that I colored in red.
Learn how to separate stimuli and behavior. There is no need to rewrite the entire chain every time. If you want to change the position of a circle, just assign a new coordinate pair to the Position property.
https://monkeylogic.nimh.nih.gov/docs_ScriptingScenes.html#GraphicAdap
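For example (a sketch in Timing Script v2; CircleGraphic and the property names follow the linked docs page, so double-check them against your version):

```matlab
crc = CircleGraphic(null_);   % a graphic adapter, not a TaskObject
crc.Size = 1;                 % diameter in degrees
crc.Color = [1 0 0];
crc.Position = [0 0];
scene = create_scene(crc);
run_scene(scene, 500);

crc.Position = [5 0];         % just assign a new coordinate pair...
run_scene(scene, 500);        % ...and reuse the same scene
```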
by Jaewon - Questions and Answers
What you captured is not the calibration matrices of the 2-D Spatial Transformation. Are you sure what you are talking about?
by Jaewon - Questions and Answers
There is no way that MLConfig saved in the datafile can differ from the settings on the menu. So I guess you loaded another task or subject profile after importing the calibration but before starting the task.
by Jaewon - Questions and Answers
Have you read the manual about the condition selection function?
https://monkeylogic.nimh.nih.gov/docs_TaskflowControl.html#Conditions
by Jaewon - Questions and Answers
If the size of your subject screen is larger than 20 degrees, you will see some discrepancy between the stimulus size calculated with the pixels per degree (PPD) and the actual size on the screen. It is because the tangent function is not linear beyond the angle of 20 degrees. It is a mathematical principle. There is no obvious solution to this, if you are using a 2-D screen.
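You can see the size of the discrepancy with a quick calculation (57 cm viewing distance is chosen only because it makes 1 degree span roughly 1 cm at the center):

```matlab
d = 57;                              % viewing distance in cm
central = d * tand(1);               % the 1st degree spans ~0.995 cm
at20    = d * (tand(20) - tand(19)); % the 20th degree spans ~1.12 cm
100 * (at20 - central) / central     % ~12% larger than the central degree
```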
If you want to re
by Jaewon - Tips
You can still do the calculation during the ITI by using the userloop or alert_function and change the images for the next trial. Some people are already doing the analysis that you mentioned with NIMH ML.
https://monkeylogic.nimh.nih.gov/docs_CreatingTask.html#Userloop
https://monkeylogic.nimh.nih.gov/docs_AlertFunction.html
The data recorded in the previous trial is available in both userl
by Jaewon - Questions and Answers
Why do you think the voltage output of the eye tracker will be the same as what you save from its software? Does its manual say so?
by Jaewon - Questions and Answers
You can't. Choose only the parameters that you are actually going to use.
by Jaewon - Questions and Answers
You can do this much more easily and accurately, if you do it offline. Is there any reason you need to do it online? And you should think about the delay in the online calculation.
by Jaewon - Questions and Answers
The time of Event 18 is not the trial end, either. If you do not know this yet, please read this manual page.
https://monkeylogic.nimh.nih.gov/docs_GettingStarted.html#AlignTimestampsAndAnalogData
In NIMH ML, each trial starts at Time 0. Events 9 & 18 are stamped sometime during a trial just like any other code. If the timestamp of the 9 is 1.234 ms, for example, it means that the 9 was s
by Jaewon - Tips
The input signals during ITI are not recorded, unless you turn on the ITI recording option.
by Jaewon - Questions and Answers
Please explain what kind of analysis the AlphaOmega system will do. The Success property may not reflect the information you need.
by Jaewon - Questions and Answers
1. Try 'General Input 1' again. You must read it from AnalogData.General.Gen1. AnalogData.EyeExtra is only for TCP/IP.
2. See this post. https://monkeylogic.nimh.nih.gov/board/read.php?3,1515
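As a sketch, after recording you would read the channel back from the data file like this (mlread and the field names follow the NIMH ML documentation; the filename is hypothetical):

```matlab
data = mlread('mytask.bhv2');            % returns one struct per trial
gen1 = data(1).AnalogData.General.Gen1;  % 'General Input 1' of Trial 1
```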
by Jaewon - Questions and Answers
You can use dashboard() or whatever way you want to display the information.
https://monkeylogic.nimh.nih.gov/docs_RuntimeFunctions.html#dashboard
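For example, in the timing script (a sketch; check the exact dashboard() signature on the linked page):

```matlab
% Show a running success rate on line 1 of the control-screen dashboard.
% TrialRecord.TrialErrors: 0 = correct trial.
hits  = sum(TrialRecord.TrialErrors == 0);
total = max(1, length(TrialRecord.TrialErrors));
dashboard(1, sprintf('Success rate: %.1f%% (%d/%d)', 100*hits/total, hits, total));
```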
by Jaewon - Questions and Answers