I guess you may be saving a huge variable whose size grows with every trial. by Jaewon - Questions and Answers
Try the MultiTarget adapter. https://monkeylogic.nimh.nih.gov/docs_RuntimeFunctions.html#MultiTarget You cannot access adapter properties in real time unless you write your own adapters. What you can do at the timing-script level is set input properties, call run_scene(), and check output properties after the scene ends. Basically, you interact with adapters from the outside. … by Jaewon - Questions and Answers
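The pattern described above (set input properties, run the scene, then read output properties) can be sketched as follows. This is a minimal sketch using SingleTarget and WaitThenHold; the property values are examples, so verify the names against your NIMH ML docs page.

```matlab
% Sketch: interacting with adapters from outside, in the timing script.
fix = SingleTarget(eye_);      % build the chain on the eye signal
fix.Target = [0 0];            % input properties are set BEFORE run_scene
fix.Threshold = 3;
wth = WaitThenHold(fix);
wth.WaitTime = 5000;
wth.HoldTime = 500;

scene = create_scene(wth);
run_scene(scene);

if wth.Success                 % output properties are read AFTER the scene ends
    trialerror(0);             % correct
else
    trialerror(4);             % no fixation
end
```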
You said you used two SingleTarget adapters. Check their Success properties after the scene ends; one should be true while the other is false. https://monkeylogic.nimh.nih.gov/docs_RuntimeFunctions.html#SingleTarget by Jaewon - Questions and Answers
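A sketch of the two-SingleTarget setup above: both adapters run in one scene (combined with OrAdapter here, which is an assumption about how the chain was built), and the Success properties are compared afterwards. Target positions and thresholds are example values.

```matlab
% Sketch: two SingleTarget adapters in one scene; check Success afterwards.
left  = SingleTarget(eye_);  left.Target  = [-5 0];  left.Threshold  = 3;
right = SingleTarget(eye_);  right.Target = [ 5 0];  right.Threshold = 3;
or_ = OrAdapter(left);  or_.add(right);    % succeed when either target is acquired
wth = WaitThenHold(or_);  wth.WaitTime = 5000;  wth.HoldTime = 300;

run_scene(create_scene(wth));

if left.Success
    chosen = 'left';
elseif right.Success
    chosen = 'right';
else
    chosen = 'none';
end
```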
There is no point in accessing the XYData of a tracker directly in the timing script. That field is automatically filled with new samples by NIMH ML when an adapter chain that includes the tracker is executed by run_scene. You need to write your own adapter and build an adapter chain with it and joy_. Inside the adapter, you can access XYData as obj.Tracker.XYData. https://monkeylogic.nimh.nih.gov/docs_Ru… by Jaewon - Questions and Answers
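A minimal sketch of such a custom adapter, assuming the usual NIMH ML layout in which user adapters inherit from mladapter and read obj.Tracker.XYData inside analyze(). The class name and property are hypothetical; check the adapter template shipped with your version.

```matlab
% Sketch: a custom adapter (MyXYReader.m, a hypothetical name) that reads
% the raw samples the tracker collected on each frame.
classdef MyXYReader < mladapter
    properties
        LastXY = [NaN NaN];   % most recent joystick sample seen this scene
    end
    methods
        function obj = MyXYReader(varargin)
            obj = obj@mladapter(varargin{:});
        end
        function continue_ = analyze(obj)
            continue_ = analyze@mladapter(obj);
            xy = obj.Tracker.XYData;          % new samples from this frame
            if ~isempty(xy), obj.LastXY = xy(end,:); end
            obj.Success = true;
        end
    end
end
```

In the timing script it would be chained onto the joystick signal, e.g. `reader = MyXYReader(joy_); run_scene(create_scene(reader));`.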
Anything is possible, but can you explain more about what kind of movement you want to see? If you show an image at the position of the joystick cursor (which you can do with the Joystick Cursor menu and the showcursor function), it will move just as the joystick moves, so it won't go straight left (or up) or look like smooth movement. by Jaewon - Questions and Answers
I don't understand what you mean by "the exact start". They both indicate the same event, the screen flip, but the timestamp of the eventmarker is acquired when the Strobe Bit actually goes HIGH, which is 125 µs later by default. That delay is necessary to allow some time for the digital output lines of the Behavioral Codes to stabilize. https://monkeylogic.nimh.nih.gov/docs_MainM… by Jaewon - Questions and Answers
If you manipulate stimuli directly with their MGL object IDs, NIMH ML does not get a chance to collect the necessary information, and the resulting data file cannot be replayed. Some adapters accept MGL object IDs, but that is for rare cases that require special control even at the cost of replayability. Usually you are not supposed to use MGL object IDs when writing timing scripts. To use TaskO… by Jaewon - Questions and Answers
Show me your task and the data file, please. I am out of the office for the rest of the week, though. by Jaewon - Questions and Answers
Now you can compress the webcam data by exporting it as MP4. Please download the latest version. by Jaewon - Questions and Answers
I added a compression option to the webcam. Choose 'Export as MP4 (or AVI)' on the menu. This option creates the video of each trial as a separate file, which is necessary for the compression. The start time of the first frame is not 0 in the webcam video, so you should refer to the AnalogData.Webcam1.Time field for the timestamp of each video frame. For data files already collec… by Jaewon - Questions and Answers
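Reading the per-frame timestamps mentioned above can be sketched like this; the filename is an example, and mlread is NIMH ML's data-file reading function.

```matlab
% Sketch: get the timestamp of each webcam frame from a saved data file.
data = mlread('myexperiment.bhv2');       % example filename
t = data(1).AnalogData.Webcam1.Time;      % frame timestamps for trial 1 (ms)
fprintf('First frame at %.1f ms after trial start\n', t(1));
```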
Please download the package again. Sorry for the inconvenience. by Jaewon - Questions and Answers
The timestamp of an eventmarker always indicates the actual time that the event occurred. There is no need to estimate or adjust anything. by Jaewon - Questions and Answers
Sometimes it is convenient to be able to abort a trial in the middle. You may accidentally set a long wait time, or a subject may not perform the one last action needed to complete the trial. To stop the current trial and pause the task, you can set a hotkey like the following. The key aborts only one eyejoytrack or one run_scene at a time, so you may need to type it multiple times until the tr… by Jaewon - Tips
Please try the new version of NIMH ML that I just uploaded. There was originally a reason why STM was programmed that way, but I think I have sorted out and handled all possible cases this time. by Jaewon - Questions and Answers
I am not sure what you mean by "take into account the shift in timing", but skipped frames matter mostly when visual stimuli are turned off. For example, you want to present something for 100 ms, but it can be shown longer than 100 ms if skipped frames unfortunately occur right at the time you try to turn it off. If frames are skipped when stimuli are about to be presented, both onset… by Jaewon - Questions and Answers
* Changes in NIMH MonkeyLogic 2 (Nov 18, 2019) + SND and STM objects stop when turned off explicitly by toggleobject. Also, STM resets the output to 0. Both objects can be reused multiple times during a trial. They must first be explicitly turned off by toggleobject (even though the stimuli have already ended). Then SND has to be rewound by the rewind_sound function before togg… by Jaewon - News
That is actually a well-known technique to bypass the Windows audio stack and achieve zero latency. It is the mixer in the Windows audio stack that combines individual sounds and plays them through one common output channel (speaker, headphone, etc.). DAQ devices do not have a mixer, so you cannot combine two sounds by playing one while the other is still playing; you have to mix them yourself. by Jaewon - Questions and Answers
You can use the analogoutput of DAQ boards if you need zero-latency sounds, but then you cannot mix multiple tones on the fly and may need an amplifier. by Jaewon - Questions and Answers
I probably deleted it when I edited the previous comment. Analogoutput channels on the same device cannot be controlled individually, so there is no point in assigning two STM objects unless you want to send out two waveforms simultaneously. Delete the other Stimulation channels, except Stimulation 1, or run putsample like the following: putsample(DAQ.Stimulation{1},zeros(1,length(DAQ.Stimu… by Jaewon - Questions and Answers
What I meant was that the time when a sound comes out of the speaker is ~40 ms later than the time recorded by the eventmarker of toggleobject. That is because sounds have to go through the Windows audio stack when they are played via the sound card. The time that eventmarkers take to reach an external machine is only ~0.2 ms, so it can be ignored. % This loop plays 4 times the BASAL tone… by Jaewon - Questions and Answers
I have a small concern about the timing of your stimuli, though. You used idle() to leave short intervals between the tones, but idle() is not that accurate timing-wise, because it has to redraw the control screen from time to time during the wait. So, if the tones should be at precise 150-ms intervals, I would use a userloop function and create one long wav sound there, in which the 5 tones are combin… by Jaewon - Questions and Answers
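Building one long waveform with the tones at exact 150-ms onsets could look like the sketch below. The sample rate, tone duration, and frequency are example values, and the output filename is hypothetical; the point is that a single sound object carries the precise timing instead of idle() calls.

```matlab
% Sketch: combine 5 tones into one waveform at exact 150-ms onset intervals.
fs = 48000;                          % sample rate (Hz), example value
tone_dur = 0.1;                      % each tone lasts 100 ms (assumption)
soa = 0.15;                          % onset-to-onset interval: 150 ms
freq = 1000;                         % tone frequency (Hz), example value
n = round((4*soa + tone_dur) * fs);  % total length covering all 5 tones
y = zeros(n,1);
t = (0:round(tone_dur*fs)-1)' / fs;
tone = sin(2*pi*freq*t);
for k = 0:4
    s = round(k*soa*fs) + 1;         % sample index of the k-th onset
    y(s:s+numel(tone)-1) = y(s:s+numel(tone)-1) + tone;
end
audiowrite('five_tones.wav', y, fs); % then load this file via the userloop
```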
You can do this as a temporary solution. putsample has to be called after STM is turned off. If you are sending out a 5V flat pulse, I would use a TTL instead; it is faster and easier.

toggleobject(1);  % TaskObject#1: crc(0.2,[1 1 1],1,0,0)
ontarget = eyejoytrack('acquirefix',1,3,10000);
if ~ontarget
    trialerror(4);  % no fixation
    return
end
toggleobject(2);  % TaskObject…

by Jaewon - Questions and Answers
You don't need to (and should not) change anything if the latency test looks fine; it is just that your computer is too busy. Using a smaller resolution will help, if you don't mind a little blurriness. I think 1 to 4 skipped frames are not that critical, although it depends on what you want to do. by Jaewon - Questions and Answers
Rewind the used sounds before turning them on again. https://monkeylogic.nimh.nih.gov/docs_RuntimeFunctions.html#rewind_object Sounds used to be rewound automatically when turned off, but they aren't anymore. This is because very long sounds can now be streamed from files, and rewinding them may take a long time while not everybody needs it. by Jaewon - Questions and Answers
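Reusing a sound within a trial then follows the pattern sketched below; TaskObject#2 being a snd() object and the wait duration are assumptions for illustration.

```matlab
% Sketch: replay a sound TaskObject by rewinding it between uses.
toggleobject(2);    % play the sound (TaskObject#2: a snd() object, assumed)
idle(500);          % let it play for a while (example duration)
toggleobject(2);    % explicitly turn it off
rewind_sound(2);    % rewind before playing it again
toggleobject(2);    % play it a second time
```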
One easy fix is to send out the waveform to the end (assuming the waveform has a trailing 0), whether the fixation is broken or not. Do you need to stop it in the middle? In some applications the last value should stay, so I need to think about how to accommodate this. by Jaewon - Questions and Answers
Yes, that is the reason. I reduced the size by taking only 16-bit color formats, but it is still huge. Compression methods work only when the video is saved as a file, and you cannot keep the compressed size once you load the video into MATLAB, so it is necessary to choose the frame size of the video wisely. I am thinking of exporting the videos as separate AVIs, but then we cannot keep them in… by Jaewon - Questions and Answers
Just do not include any adapter that tracks behavior, such as WaitThenHold. You can program the same thing in the scene framework like the following.

tc = TimeCounter(null_);
tc.Duration = sample_time;
scene = create_scene(tc,sample);
run_scene(scene,20);
idle(0);  % Clear the screen. Not necessary if this is not the last scene.

by Jaewon - Questions and Answers
Please download the new version of NIMH ML and try the task attached below. Now SingleTarget gets the target position from the child adapter (CurveTracer in the attached code) if no Target is assigned. by Jaewon - Questions and Answers
* Changes in NIMH MonkeyLogic 2 (Nov 8, 2019) + A new adapter, AnalogInputMonitor, is added for online analog input monitoring. https://monkeylogic.nimh.nih.gov/docs_RuntimeFunctions.html#AnalogInputMonitor + During the I/O Test, the voltage range of each General Input in the display can be adjusted. To change the range, click one General Input panel and then click the curren… by Jaewon - News
You can use WaitThenHold but set its HoldTime to 0. Then, when the WaitThenHold succeeds, run another scene that just shows the sample image without checking any behavior. Does that make sense? by Jaewon - Questions and Answers
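The two-scene idea above can be sketched as follows; fixpoint and sample stand for TaskObject names, and the durations are example values.

```matlab
% Sketch: scene 1 detects acquisition (HoldTime 0); scene 2 just shows the
% sample with no behavioral requirement (TimeCounter tracks nothing).
fix = SingleTarget(eye_);  fix.Target = [0 0];  fix.Threshold = 3;
wth = WaitThenHold(fix);   wth.WaitTime = 5000;  wth.HoldTime = 0;
run_scene(create_scene(wth, fixpoint));   % fixpoint: a TaskObject (assumed)

if wth.Success
    tc = TimeCounter(null_);
    tc.Duration = 500;                    % show the sample for 500 ms
    run_scene(create_scene(tc, sample));  % sample: a TaskObject (assumed)
end
idle(0);                                  % clear the screen afterwards
```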