The timestamp of an eventmarker always indicates the actual time that the event occurred. There is no need to estimate or adjust anything.
by Jaewon - Questions and Answers
Sometimes it is convenient to be able to abort a trial in the middle. You may accidentally set a long wait time, or a subject may not perform the last action needed to complete the trial. To stop the current trial and pause the task, you can set a hotkey like the following. The key aborts only one eyejoytrack or one run_scene at a time, so you may need to press it multiple times until the tr…
by Jaewon - Tips
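The example in the post above is cut off. A minimal sketch of such a hotkey, assuming the standard `hotkey` syntax of NIMH ML timing scripts; the key name 'x' and the use of escape_screen() here are assumptions, not the author's verified code:

```matlab
% Hedged sketch: bind a key that aborts the current eyejoytrack/run_scene
% and brings up the pause menu. 'x' is an arbitrary choice.
hotkey('x', 'escape_screen();');
```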
Please try the new version of NIMH ML that I just uploaded. There was initially a reason why STM was programmed in that way, but I think I sorted out and handled all possible cases this time.
by Jaewon - Questions and Answers
I am not sure what you mean by "take into account the shift in timing", but skipped frames matter mostly when visual stimuli are turned off. For example, you want to present something for 100 ms, but it can be shown longer than 100 ms if skipped frames happen to occur right at the time you try to turn it off. If frames are skipped when stimuli are about to be presented, both onset…
by Jaewon - Questions and Answers
* Changes in NIMH MonkeyLogic 2 (Nov 18, 2019)
  + SND and STM objects stop when turned off explicitly by toggleobject. Also STM resets the output to 0. Both objects can be reused multiple times during a trial. They first should be explicitly turned off by toggleobject (although the stimuli already ended). Then SND has to be rewound by the rewind_sound function before togg…
by Jaewon - News
That is actually a well-known technique to avoid the Windows audio stack and accomplish zero latency. It is the mixer in the Windows audio stack that combines individual sounds and plays them through one common output channel (speaker, headphone, etc.). DAQ devices do not have a mixer, so you cannot combine two sounds by playing one while the other is being played and should mix them yourself.
by Jaewon - Questions and Answers
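As a hedged illustration of mixing sounds yourself before sending them to a DAQ analog output: without a mixer, two tones are combined by sample-wise addition and then rescaled to avoid clipping. The sample rate and frequencies below are made up for the example.

```matlab
% Sketch: mix two tones yourself, since DAQ analog outputs have no mixer.
fs = 48000;                      % assumed output sample rate
t  = 0:1/fs:0.5-1/fs;            % 500 ms of samples
a  = sin(2*pi*440*t);            % tone 1
b  = sin(2*pi*660*t);            % tone 2
mix = a + b;                     % sample-wise sum = "mixing"
mix = mix / max(abs(mix));       % renormalize to [-1,1] to prevent clipping
```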
You can use analogoutput of DAQ boards if you need zero-latency sounds, but then you cannot mix multiple tones on the fly and may need an amplifier.
by Jaewon - Questions and Answers
I probably deleted it when I edited the previous comment. Analogoutput channels on the same device cannot be controlled individually, so there is no point in assigning two STM objects, unless you want to send out two waveforms simultaneously. Delete the other Stimulation channels, except Stimulation 1, or run putsample like the following.

```matlab
putsample(DAQ.Stimulation{1},zeros(1,length(DAQ.Stimu…
```

by Jaewon - Questions and Answers
What I meant was that the time when a sound comes out of the speaker is ~40 ms later than the time recorded by the eventmarker of toggleobject. It is because sounds have to go through the Windows audio stack when they are played via the sound card. The time that eventmarkers take to reach an external machine is just ~0.2 ms, so it can be ignored.

```matlab
% This loop plays 4 times the BASAL tone…
```

by Jaewon - Questions and Answers
I have a little concern about the timing of your stimuli, though. You used idle() to leave short intervals between the tones, but idle() is not that accurate timing-wise because it has to redraw the control screen from time to time during the wait. So, if the tones should be presented precisely at 150-ms intervals, I would use a userloop function and create a long wav sound there, in which 5 tones are combined…
by Jaewon - Questions and Answers
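A hedged sketch of that idea: build one waveform in which five tones sit at exact 150-ms onset-to-onset intervals, so the spacing is sample-accurate instead of depending on idle(). The tone frequency, duration, and sample rate here are illustrative, not from the original post.

```matlab
% Sketch: 5 tones at exact 150-ms intervals in a single waveform.
fs   = 48000;                                  % assumed sample rate
tone = sin(2*pi*1000*(0:1/fs:0.1-1/fs));       % 100-ms, 1-kHz tone
gap  = zeros(1, round(0.05*fs));               % 50-ms silence -> 150-ms onset interval
wav  = [repmat([tone gap],1,4) tone];          % 5 tones, one waveform
% In the userloop, return this as one snd TaskObject and play it with one toggleobject.
```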
You can do this as a temporary solution. putsample has to be called after STM is turned off. If you are sending out a 5-V flat pulse, I would use a TTL. It is faster and easier.

```matlab
toggleobject(1);    % TaskObject#1: crc(0.2,[1 1 1],1,0,0)
ontarget = eyejoytrack('acquirefix',1,3,10000);
if ~ontarget
    trialerror(4);  % no fixation
    return
end
toggleobject(2);    % TaskObject…
```

by Jaewon - Questions and Answers
You don't need to (and should not) change anything if the latency test looks fine. It is just because your computer is too busy. Using a smaller resolution will be helpful if you don't mind a little blurriness. I think 1 to 4 skipped frames are not that critical, although it depends on what you want to do.
by Jaewon - Questions and Answers
Rewind the used sounds before turning them on again. https://monkeylogic.nimh.nih.gov/docs_RuntimeFunctions.html#rewind_object Sounds used to be rewound automatically when turned off, but they are not anymore. This is because very long sounds can now be streamed from files, and rewinding them may take a long time while not everybody needs it.
by Jaewon - Questions and Answers
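A hedged sketch of the reuse pattern described above, assuming TaskObject#2 is a snd object (the object number and wait time are illustrative):

```matlab
% Sketch: reusing a sound within a trial now requires an explicit rewind.
toggleobject(2);     % play the sound (TaskObject#2, snd)
idle(1000);          % let it finish playing
toggleobject(2);     % explicitly turn it off, even though it already ended
rewind_object(2);    % rewind it; this is no longer automatic
toggleobject(2);     % play it again from the beginning
```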
One easy fix is to send out the waveform to the end (assuming that the waveform has a trailing 0), whether the fixation is broken or not. Do you need to stop it in the middle? In some applications, the last value should stay there, so I need to think about how to accommodate this.
by Jaewon - Questions and Answers
Yes, that is the reason. I reduced the size by taking only 16-bit color formats, but it is still huge. Compression methods work only when the video is saved as a file, and you cannot keep the compressed size once you load the video into MATLAB, so it is necessary to choose the frame size of the video wisely. I am thinking of exporting the videos as separate AVIs, but then we cannot keep them in…
by Jaewon - Questions and Answers
Just do not include any adapter that tracks behavior, such as WaitThenHold. You can program the same thing in the scene framework like the following.

```matlab
tc = TimeCounter(null_);
tc.Duration = sample_time;
scene = create_scene(tc,sample);
run_scene(scene,20);
idle(0);  % Clear the screen. Not necessary if this is not the last scene.
```

by Jaewon - Questions and Answers
Please download the new version of NIMH ML and try the task attached below. Now SingleTarget gets the target position from the child adapter (CurveTracer in the attached code), if no Target is assigned.
by Jaewon - Questions and Answers
* Changes in NIMH MonkeyLogic 2 (Nov 8, 2019)
  + A new adapter, AnalogInputMonitor, is added for online analog input monitoring. https://monkeylogic.nimh.nih.gov/docs_RuntimeFunctions.html#AnalogInputMonitor
  + During the I/O Test, the voltage range of each General Input in the display can be adjusted. To change the range, click one General Input panel and then click the curren…
by Jaewon - News
You can use WaitThenHold but make its HoldTime 0. Then, when the WaitThenHold succeeds, run another scene that just shows the sample image without checking any behavior. Does it make sense?
by Jaewon - Questions and Answers
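A hedged sketch of that two-scene idea in the scene framework; the TaskObject numbers, thresholds, times, and eventmarker codes below are illustrative assumptions:

```matlab
% Scene 1: wait for fixation; HoldTime 0 means success the moment it is acquired.
fix = SingleTarget(eye_);          % track the eye signal
fix.Target = 1;                    % TaskObject#1: fixation point (assumed)
fix.Threshold = 3;                 % fixation window, in degrees
wth = WaitThenHold(fix);
wth.WaitTime = 5000;
wth.HoldTime = 0;                  % succeed as soon as fixation is acquired
scene1 = create_scene(wth,1);
run_scene(scene1,10);              % eventmarker 10 at scene start

% Scene 2: just show the sample; TimeCounter tracks no behavior.
if wth.Success
    tc = TimeCounter(null_);
    tc.Duration = 1000;            % show the sample for 1 s
    scene2 = create_scene(tc,[1 2]);  % TaskObject#2: sample image (assumed)
    run_scene(scene2,20);
end
```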
Use the userloop function. https://monkeylogic.nimh.nih.gov/docs_CreatingTask.html#Userloop There are a few example tasks under the task\userloop directory, too.
by Jaewon - Questions and Answers
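For orientation, a hedged skeleton of a userloop function with the signature described in the linked documentation; the filenames and TaskObjects here are placeholders:

```matlab
% Sketch: a minimal userloop. It returns the TaskObjects and the timing file
% for the next trial, so stimuli can be chosen programmatically per trial.
function [C,timingfile,userdefined_trialholder] = example_userloop(MLConfig,TrialRecord)
timingfile = 'example_timing.m';               % placeholder timing script
userdefined_trialholder = '';                  % use the default trial holder
C = { 'fix(0,0)', 'pic(sample.bmp,0,0)' };     % placeholder TaskObjects
end
```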
I built the scene framework (timing script v2) for this kind of purpose. https://monkeylogic.nimh.nih.gov/docs_CreatingTask.html#RuntimeVersion2 Take a look at the following examples in the task directory. They are not exactly the same as what you described but do something similar.

    task\runtime v2\10 pursuit eye movement
    task\runtime v2\11 curve trace

If you explain how you are going to…
by Jaewon - Questions and Answers
The "confirmation" on the command window just tells you that the software did what it was supposed to do and did not detect any error in doing it. It does not necessarily mean that you should be able to see some changes on your "neural recording monitor", since it is not a test of your hardware. It seems that you did some tests on your hardware, but you need to describe th…
by Jaewon - Questions and Answers
I added a property (OnsetTime) to ImageChanger so that you can tell when the "correct" image was presented. The OnsetTime (an n-by-1 vector) is NaN for the images not presented during the scene. I think the easiest way to retrieve the eye trace is to use get_analog_data().

```matlab
correct_image = 10;
onset = img.OnsetTime(correct_image);
if isnan(onset)  % The correct image was no…
```

by Jaewon - Questions and Answers
I should mention that there is an option to send out the TTL as a second analog output, IF you have two analog output channels AND program with the scene framework. I thought we might need a new adapter, but the Stimulator adapter is already doing it. In that way, two pulses can start and stop exactly at the same time.
by Jaewon - Questions and Answers
I cannot update the download packages at the moment. Please overwrite your mlconfig and mlread with the attached files and see if you can read the task.
by Jaewon - Questions and Answers
Please see the following link. https://monkeylogic.nimh.nih.gov/docs_GettingStarted.html#FormatsSupported Also, these commands may help. https://monkeylogic.nimh.nih.gov/docs_RuntimeFunctions.html#behaviorsummary
by Jaewon - Questions and Answers
I can examine why the conversion failed if you send me the config file. I think you can use Private Messages.
by Jaewon - Questions and Answers
Thanks, Mitch. Please give me some time to take a look at the files.
by Jaewon - Questions and Answers
Delete the configuration file of the task (*_cfg2.mat) and try again. Some fields in the MLConfig struct have changed to store information for two eye trackers. NIMH ML is supposed to convert the format seamlessly when loading old config files, but it doesn't seem to have gone well somehow.
by Jaewon - Questions and Answers
toggleobject does not know how long the stimulations will last. If you call two toggleobjects in a row, you are just turning them off as soon as they are on. Try inserting "idle(2000);" or "pause(2);" between them and see if it works for you. The latter is more precise in timing, but it won't update the screens for 2 s. If they are not precise enough, we can make an ada…
by Jaewon - Questions and Answers
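A hedged sketch of the fix described above; the TaskObject number and the 2-s wait are illustrative, matched to the duration of the stimulation waveform:

```matlab
% Sketch: leave time for the waveform to play out between the two toggles.
toggleobject(3);   % TaskObject#3 (stm, assumed): starts the stimulation
idle(2000);        % wait 2 s; idle() keeps the control screen updating
toggleobject(3);   % now turn it off; back-to-back calls would stop it immediately
```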