On-Demand | fNIRS for Mental Workload Assessment

BIOPAC citations are continually updated; the current count is 50,900. Search publications on the BIOPAC website.

Are you interested in getting started with fNIRS?
fNIRS is growing in popularity as a tool for studying human cognition including workload, memory, learning, attention and more. It is simpler than ever to incorporate into your experiments. Join BIOPAC for a compelling exploration of using fNIRS in your research. We will present a lightweight, ergonomic, and easy-to-use ecosystem of recording hardware and software. Plus, we will demonstrate each step of the process—from setting up the participant to running an experiment and analyzing the results.

You will learn how to:

  • Operate fNIRS equipment
  • Record high quality data
  • Perform an experiment
  • Analyze data
  • Synchronize with other physiological signals
  • Synchronize with stimulus presentation


About Alex Dimov  

Alex Dimov (BIOPAC Systems, Inc.) has been teaching workshops on the topic of physiological data acquisition and analysis for over 15 years. While at UC Santa Barbara he was an instructor for The Advanced Training Institute for Virtual Reality in Social Psychology. He joined BIOPAC as an application specialist and now oversees European Sales for BIOPAC.

0:04

Welcome, everyone, and thank you for joining us. I'm Brenda Dentinger from BIOPAC, and I will be your moderator on today's webinar: fNIRS and Mental Workload Assessment.

0:20

Today we're going to look at how to collect workload data using one or several fNIRS devices. We can bring that data into the software, COBI and AcqKnowledge, to conduct some analysis.

0:33

And our speaker is going to show you how to do all of that today.

0:38

This is part one about workload.

0:41

If you're interested in learning how to add other signals to fNIRS for workload, you can join us for the next webinar on March 30th: Multimodal Workload, Combining fNIRS, ECG, HRV, Eye Tracking, and Stimulus Presentation.

0:59

Before we dive into today's presentation, I have some housekeeping.

1:05

All attendees are muted, so please submit your questions and comments through the GoToWebinar control panel; that is how you communicate with me, your moderator.

1:15

When the webcam is turned on, if it looks small to you, you can grab this little gray bar right over here and expand or shrink it back down.

1:26

Today’s webinar is being recorded, and we’ll send each of you a link to the recording once it’s done processing. And finally, we have a survey at the end of the webinar. Please complete this survey so we can have your feedback and ideas for future webinars.

1:44

Now, onto our presenter, I’m excited to introduce Alex Dimov

1:49

Alex is an fNIRS expert here at BIOPAC; he is also the Head of Sales for Europe.

1:55

He has conducted over 100 seminars and hands-on workshops with BIOPAC over the past 15 years. Welcome, Alex.

2:04

Thanks, Brenda. Welcome everyone.

2:06

All right, I'm making you the presenter. There you go.

2:10

All right, thank you.

2:20

As Brenda already introduced the title for today, we’ll look at functional near infrared spectroscopy in the context of mental workload assessment.

2:33

Probably a lot of you are already familiar with BIOPAC.

2:35

But just in case you are not there are over 40,000 citations with BIOPAC equipment.

2:45

We're present in pretty much all the top universities in the world. Of these publications, over a thousand are on mental workload, and about a quarter of those are using functional near infrared spectroscopy.

3:03

And, we have solutions both for research and education at BIOPAC, and devices to stimulate various sensory modalities like Electrical Stimulation, temperature, olfactory, et cetera.

3:20

And then, to record physiological responses: changes in electrocardiogram, EEG, EMG, transducer signals, et cetera.

3:31

So pretty much everything you can think of capturing the response of the participant.

3:39

So, the plan for today is to show you the paradigm I'm going to work with and explain what options you have for setting up equipment, how to record high quality data, how to perform an experiment, analyze the data, and also say a few words about synchronizing with other physiological signals and synchronizing with stimulus presentation.

4:09

I'm sure there'll be a lot of questions from you, and we'll take them during the polls and also when we finish the main part of the presentation.

4:21

If you are not familiar with functional near infrared, I'll just briefly cover what it's about; we don't really have the time to go much in depth. But generally, the idea is that we're using infrared light to measure changes in oxygenated and deoxygenated hemoglobin in the brain.

4:47

These hemodynamic changes are associated with neural activity.

4:53

So the way it works is we have pairs of sources and detectors. We place a sensor on the head that has pairs of sources and detectors.

5:05

We're emitting infrared light into the brain, and we're using two different frequencies of light, because oxy- and deoxyhemoglobin absorb different parts of the light spectrum.

5:19

And then we’re seeing at the detector how much light was absorbed.

5:25

So, we’re using 730 nanometers and 850 nanometers light emitters and detectors, and then we’re also collecting ambient light.

5:36

This allows us to correct for changes in the ambient light situation, like the sun outside or simply the lights in the room and our position with respect to them. The distance between the sources and the detectors (right here, I'm pointing at a source, and the black disk is a detector) is 2.5 centimeters.

6:07

And we also have what we call near optodes. They are also referred to as short optodes; there are different ways to talk about those.

6:17

The key thing here is that the separation is only a centimeter. So we're really only looking at surface changes: not into the cortex, but at hemodynamic changes pretty much at the skin.

6:30

And this gives you some options afterwards if you want to use that data to remove superficial hemodynamic influence from the data you're recording from the cortex. What we're measuring is relative change in micromoles per liter.

6:51

Now, this is a greatly slowed down video of the light emitters activating. They activate in succession so that the measurements at the different sites will not interfere with each other.

7:07

OK, and as I mentioned, we're imaging in between the light emitters and the detectors. So, in blue here, we can see the areas that we're measuring from.

7:19

So, it's a small volume; we're not penetrating any more than about a centimeter into the cortex.

7:28

And you can also see 17 and 18 are the short or near optodes.

7:35

We’re placing the sensors on the prefrontal cortex, so we’re measuring, from the left middle and right prefrontal cortex, you can see here how this would superimpose on the brain.

7:47

So I’m going to go back and forth a couple of times so you can get an idea.

7:51

And this is what the sensor looks like when it’s placed in a band like this, and that’s what the sensor looks like, OK, we’ll cover that in a bit more detail later on.

8:09

So what’s the task that we’re using it for today?

8:13

The experiment consisted of two different tasks: a short-term memory and digit transformation task versus playing the FIFA video game.

8:24

The short-term memory and digit transformation task is the higher workload condition.

8:30

And playing the video game, which was actually set at its easiest level, is the low workload condition. What was the task?

8:40

The participant would see three digits on the screen, like 505, for example, as illustrated here, then the participant has to remember the digits.

8:52

And when the screen refreshes, they have to add one to each digit and report the results. So they would enter 6, 1, 6, because the previous digits were 5, 0, 5.

9:08

And this just goes on for a few minutes. There are a number of trials, and the score is kept track of, so the participant is trying to be as accurate as possible.
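For reference, the task logic itself is tiny. Here is a minimal Python sketch; the webinar does not say what happens when a digit is 9, so the wrap-around to 0 below is an assumption, and the function names are illustrative:

import random

def add_one(digits: str) -> str:
    # Expected response: each digit incremented by one (assumed to wrap 9 -> 0).
    return "".join(str((int(d) + 1) % 10) for d in digits)

def run_trials(n_trials: int = 5) -> int:
    # Run a few console trials and keep score, as the experiment does.
    score = 0
    for _ in range(n_trials):
        stimulus = "".join(random.choice("0123456789") for _ in range(3))
        response = input(f"Digits were {stimulus}. Add one to each: ")
        score += response == add_one(stimulus)
    return score

print(add_one("505"))  # -> "616", matching the example above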

9:19

OK, after that, they play the video game, then they do the high workload task again, then videogame again, and so forth.

9:23

Just a few words about this task.

9:35

This was inspired by the Add-1 task description in the book Thinking, Fast and Slow by Daniel Kahneman, and there is a publication here you can refer to.

 

How to create the task:

9:50

We use the Vizard software, which we use for virtual reality.

9:55

But it's also very convenient in the context of an interactive experiment, because it allows us to send markers both to COBI (Cognitive Optical Brain Imaging Studio), the software we use for recording fNIRS data, and to AcqKnowledge, the software we use to record other physiological signals.

10:17

OK, we’ll speak a little bit more about that as well later on, but let’s jump forward and look at the results we got from this experiment.

10:32

Just to let you know what to expect.

10:34

Afterwards we’ll cover all the steps from setting up the equipment, recording data, and analyzing.

10:41

So this is a preview of the end result, and it's data from only one participant. We only have so much time in this workshop, so we're only going to look at one participant's worth of data.

10:55

So up here, we have illustrated on the screen the oxygenation signal, which in this case is the difference between oxygenated hemoglobin and deoxygenated hemoglobin,

11:12

for the duration of the experiment. And we can see the different conditions: between markers 1 and 1,

11:18

we had the high workload condition, and we can see the oxygenation was going up pretty much throughout the task.

11:27

And we have a bit of a pause.

11:29

Then there is the video game and oxygenation dropped across all the channels.

11:37

Then, we have, again, the high workload task; oxygenation went up.

11:42

And then, we have the video game task.

11:45

So we will explore how to get to that point.

11:51

Another way of looking at the results.

11:54

We can average some of the data. We can average spatially, we can average temporally.

12:00

So here on the left, we can see a grand average. We had these four different blocks: two high workload and two low workload blocks.

12:11

And if we look at the change in oxygenation from baseline at the beginning of each task and average everything (average the blocks, average all the optodes), we can see, for the high workload task,

12:25

here on the left, an increase of about two micromoles per liter compared to baseline.

12:31

And for the low workload task, we actually went below baseline. So it’s a bit difficult to see the numbers, but right in the middle was zero.

12:41

On the right side, we can see data for the different conditions.

12:48

We're averaging the blocks, but we're looking at all the optodes of the sensor.

12:54

And we can see that pretty much across the board, we are seeing big changes from low workload to higher workload.

13:04

The purple is the higher workload task and the blue is the game. With the short optodes, it's a bit all over the place, and we would pretty much not expect anything there.

13:19

The reason why we have the short optodes is to correct for hemodynamic artifacts if we need to. OK, but right now, we're not even going to use that.

13:32

OK.

13:33

So let’s take a break for our first poll and then we’ll go ahead and discuss the process of setting up the participants and getting good data.

13:46

OK, great, thanks, Alex. I’ll go ahead and launch the poll.

13:50

Are you interested in workload, fNIRS, or both?

13:56

You all could just take a couple of seconds and answer that.

14:01

So I did wanna just remind everybody that the slide deck will be available for participants later.

14:06

I know that we had a question about that. And if you have questions or comments that you want to pass along to Alex, you are more than welcome to ask those in the GoToWebinar control panel. There's a little questions pane there.

14:26

It looks like almost everybody has voted here.

14:32

I’m just gonna give it another 5 seconds or so.

14:40

Yes, primarily people are interested in workload and fNIRS; quite a bit over 70% are interested in workload and fNIRS combined.

14:53

And then a quarter of the people are interested in fNIRS by itself, and only a few in workload by itself. So that's interesting.

15:03

All right, thanks everyone for participating. I’m gonna close the poll back to you, Alex.

15:08

All right, thank you, Brenda, and thanks for participating in the poll.

15:17

Let’s go over the various options that you have for performing fNIRS measurements.

15:25

The system consists of three components really. You have a sensor, you can see it here in the upper right, and that’s different sensors, that’s what they look like.

15:37

Then you have a cable.

15:38

The cable snaps onto the sensor.

15:41

Then the cables plug into the imager.

15:43

So you have sensor cables and an imager and the imager interfaces to the computer via USB unless it’s a wireless imager, in which case it sends the data wirelessly.

15:58

A huge advantage of our sensors is that they are extremely lightweight. And you can see, they're thin and flexible.

16:12

So these are different sensors.

16:13

There are three sensors, really: an 18-optode, a 6-optode, and a 5-optode sensor.

16:21

So you can place them on the pre-frontal cortex, it’s very easy and quick to apply, it’s a fixed configuration. So you just put it on.

16:30

And we have cross system compatibility.

16:32

So there are different imagers, and the sensors work with pretty much all of them, except for the educational imager, which can only work with the smaller sensors, so the 5- or 6-optode sensor.

16:46

So on the left side, we have our high end imager which supports 54 optodes.

16:55

Then we have the C imager, which is used for this experiment, and the physical footprint of the C imager is very small.

17:06

And then we have the E imager, the educational imager. So this is the educational imager and it has just a single connection.

17:14

So, it can connect only to the little 6 or 5 optode sensors, which is quite enough, for teaching students concepts about functional near infrared.

17:25

And then we have the mobile imager.

17:28

And what’s great about the mobile imager is that you can also place the imager on the arm and then the cables would go to the head and then you can take full advantage of the fact that the sensors are extremely lightweight and unobtrusive.

17:42

So, if you’re doing real-world daily tasks, having that sort of arrangement is very helpful.

17:52

The software can also interface to multiple devices at the same time, and they can be a mix of the various imagers, so you can do hyperscanning with many people.

18:04

And placing the sensor: this is how it works.

18:07

So, you can see the sensor here, and we have a band.

18:12

That's what the band looks like, but I put it in the slides so it's a bit easier to see.

18:17

So, we place the sensor in the band.

18:22

Then, we snap on the cables to the sensor.

18:28

And this little black attachment here is actually where we do the processing.

18:34

So there is a very short signal path from the sensor to the sensor head, where the electronics are.

18:44

So we attach the sensors and the sensor head to the cables. And that’s what it looks like.

18:49

So it’s inside and you put it on.

18:54

The whole process takes but a minute, and it has to be centered in the forehead and there is a notch on the sensor.

19:03

So you can do that.

19:09

You can even place it on yourself very easily.

19:12

So I’m just going to play back here.

19:15

We have this Velcro contraption in the back, which allows you to guide the cables and keep them in place.

19:24

So here you can see how I can place it on myself. Of course, usually it's placed on the participant, but if you want to test things out, it's very good to have a system that's very easy to deploy, so you're up and running in a couple of minutes. And now I want to show you the software.

19:50

The software is called Cognitive Optical Brain Imaging Studio or COBI.

20:02

So this is a screen where we've already identified the equipment, but let's just go back to the beginning. This is the startup screen, and we can use the auto-load, so it will detect what equipment is connected at the moment.

20:25

So it finds the imager (it's the C imager), and it finds what sensor we have connected (the 18-optode sensor).

20:36

We can also add additional data sources like the wireless imager or more C imagers, et cetera, if we wanted to get more.

20:48

We can get markers from the keyboard and the mouse; that's the default selection. But we can add more marker sources: we can use the serial port, which is pretty standard for stimulus presentation software, or the parallel port. We can also use the BNC port of the device.

21:10

So there's a physical BNC port on the back of the imager which you can use to receive markers. We can also receive markers over the network.

21:21

And that's really nice, because the network markers can then contain a lot of descriptive information, like ASCII codes, et cetera.

21:30

So, just over TCP/IP, we can send markers. And one more thing that we can do over the network is send the data itself.

21:43

Again, over TCP/IP, we can send the raw light data, or the hemoglobin data, or both.

21:52

So if you wanted to do biofeedback experiments (for instance, a virtual world that changes with your responses), that's the infrastructure you need. We'll go ahead and remove this marker source and proceed to the next stage. This is when we're checking the data quality.

22:12

This is something that you do with every participant, because there are variations in skin pigmentation, skull thickness, et cetera, so you want to check the quality. 17 and 18 are the short optodes; these correspond to the measurements that are done pretty much at the skin.

22:36

So we can see the signals there, and pay attention to the colors. The blue and purple are the raw signals coming from the two different wavelengths, and the orange is ambient light. This data is physiologically meaningless at the moment; it's just the raw data, and we just want to be sure that we're in the middle of the range. So you don't want to be too high or too low.

23:02

These units are millivolts, so you want to be above about one thousand millivolts and below, say, 3,000. We can adjust the light: the LED current is how much light we're sending into the brain, and it's already maxed out.

23:20

But you know, we could dial it down if we had to, and then we have the gain. We can change the gain of the sensors.

23:26

So if we increase the gain, the signal is going to go up, if we decrease the gain the signal is going to go down and the reference optodes stay at one. So now we’re going to increase the gain to seven.

23:40

Let’s disable the markers, increase the gain to seven and you can see the signals just went up right across the board.

23:47

The signals went up, but some of the channels are a little bit higher than the other ones. So you’ll want to have some individual control, so you can go to the advanced setup.

24:02

And now you can individually change the gains for the various optodes, and you can also change the LED currents for the different light emitters. So, there we go. Now we can see that the signals dropped more toward the middle, just by modifying those.

24:20

And for the sake of illustration, I'm just going to lower the current here, three times, in one group.

24:32

OK, so, you can see the signals just dramatically drop, we don’t want them to be anything like that, but I’m just showing you how you can make these adjustments.

24:42

So, that gives you fine control to optimize data quality for each participant. We just make sure that we're happy with where the signals are ending up, and we also have to be mindful of sources of infrared light that may be getting into our signal. Right now it's very clean.

25:12

However, in my office, I have strong lights directly above, so I'll just turn them on here and then turn them off again during this test.

25:26

It's a very strong infrared emitter.

25:28

So, we want to make sure that we don’t have these big, ambient light changes.
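For reference, the manual quality check just described can be expressed as a simple screening pass. The sketch below assumes the raw levels arrive as NumPy arrays in millivolts; the 1,000-3,000 mV window comes from the talk, while the array layout, function name, and ambient threshold are illustrative assumptions rather than COBI features:

import numpy as np

def check_raw_levels(raw_mv, ambient_mv, lo=1000.0, hi=3000.0, ambient_max=200.0):
    # raw_mv and ambient_mv: arrays of shape (n_samples, n_optodes).
    issues = []
    for i, level in enumerate(raw_mv.mean(axis=0), start=1):
        if level < lo:
            issues.append(f"optode {i}: low ({level:.0f} mV), raise gain or LED current")
        elif level > hi:
            issues.append(f"optode {i}: near saturation ({level:.0f} mV), lower gain")
    if ambient_mv.mean(axis=0).max() > ambient_max:
        issues.append("high ambient light: check for infrared sources")
    return issues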

25:40

At this point, we go on to the experiment. We can either catalog the data (enter the experiment name, condition, and information about the subject) or just go ahead and record some data, and it's going to get logged. We take a baseline in the beginning; it's ten seconds, but we could change that baseline later to be something else.

26:01

And we're recording; you can see on the bottom, in green.

26:04

It says recording. And now we'll go ahead and look at the oxygenation data; because we did the baseline, we can now look at changes in oxygenation. The red is oxygenated hemoglobin, the blue is deoxy.

26:21

Now, we’re looking at all the optodes at the same time.

26:25

Here you can disable or enable markers. We can also cycle through the different possible signals: now we have only deoxyhemoglobin, in blue, then total hemoglobin, and then we have oxygenation, the difference between oxy- and deoxyhemoglobin.

26:53

And as you notice, we also changed the time range, so we’re seeing all the data from the beginning.

27:04

That's it. At this point, let's say we're done with the recording; we finalize the experiment.

27:11

We can enter descriptions for the markers; because they're numerical markers, we can associate them with labels. We can enter some comments that will be saved together with the data.

27:28

Notes about the experiment, for instance. Then we click Finalize, and we can view the data quickly to make sure everything looks good. At this point we would be done with the data acquisition stage, and the next stage, data analysis, is our next segment.

27:56

So, let's go to our second poll.

28:00

OK, great, thanks, Alex. I’ll go ahead and launch the poll.

28:04

Would you like to use fNIRS for research, education, or both?

28:10

And I'm getting a lot of questions; we might not be able to get to all of these questions today, but we do publish a questions and answers document at the end, and we send that along after a couple of weeks, when we have time to answer all the questions. However, let me ask you this real quick, from Jane.

28:28

What was the time duration between the different simulations that you showed earlier?

28:35

You mean for the experiment that we're using, right? I'm just going to assume that this is the question.

28:40

So, we will be able to see it now, because I'll open up the data. It's actually minutes worth of data: we have a pause of about 30 seconds, and then we have several minutes, I think 5 or 6 minutes, and then another pause. So we're looking at fairly long conditions in this particular paradigm, but you can also have paradigms with discrete stimuli.

29:03

And if you have discrete stimuli, then the question is very good, because then you have to be mindful of the fact that this is a hemodynamic response, so it's slow; it's about 7 seconds to peak. So you have the stimulus and then the response. If you have discrete stimuli, you need to leave yourself time, probably a good 10 seconds after the response you expect,

29:31

until the next stimulus. The protocol is definitely important.

29:39

We can discuss more later on, on that topic.

29:46

OK, I just closed the poll, so thank you all for participating, it’s mostly research people and then also a big chunk of research and education, so thanks for sharing that.

29:57

Back to you, Alex.

30:00

Thank you.

30:01

And I want to make something clear about what I will do right now; let me open up the software here.

30:17

OK, the data that I will analyze right now, we can make that data available as a part of the webinar. I’m pretty sure that that is possible. Maybe Brenda can confirm that afterwards. But if you want to then try some of these things, you can use the same dataset.

30:37

OK, all right, so FNIRSoft is the software we use to analyze the data.

30:44

So we just click on Open, and there's a default folder in My Documents: COBI automatically sends the data to its folder, and then fNIRSoft looks for it there.

30:58

So by default, you can find the files and you can see here all the various sessions.

31:06

So we'll open the NIR file. This file can even be opened in Excel; it's not proprietary or anything, so you can open it with anything you have.

31:18

So, we have this NIR file. There is an oxy file which contains oxygenation, already calculated, and then markers. So let’s open it up.

31:28

And we want to load the associated markers.

31:32

So now, this is the data for the task that I described in the beginning. We have a high workload condition from marker one to marker one. This is the Add-1 task, the task where you have to remember a sequence of digits and recall them while adding one to each digit. So we have a rest here, and I'm just going to click here; this is 34 seconds.

31:57

Here we are at 315 seconds.

32:00

We have about 5, six minutes.

32:03

And then, about 34 seconds later, we begin the second condition.

32:07

It’s a little bit longer. That’s just because the game has a specific duration that it’s played for.

32:15

And then, we have, again, about five minutes. The game conditions are closer to seven minutes; if you want to, you can, of course, make these more evenly balanced.

32:29

This is an example, not an actual experiment, so we're being approximate.

32:38

So, on the bottom here, we have the ambient light, in the greenish color, from the various sensors. You want this to be as low as possible, and it is pretty much at the bottom. This means we did not get interference from ambient lights, which is a good thing.

32:59

The purple and the darker purple, or magenta (I'm not very good with color recognition),

33:09

so the other waveforms represent the raw data from the detectors for 730 and 850 nanometers.

33:21

So, right now, before we actually start to make sense of the data, we have to clean up the raw data and then calculate the oxygenation. So, let's refine the data: I'm going down here to the bottom and clicking Refine.

33:38

So, we have a lot of options here, let’s click on Next.

33:42

We can design our filters. There’s already existing presets, but we could design additional filters with the filter design tool.

33:52

And in this case, I'll probably just use one of the existing ones for the 10 Hertz data that we have (the imager is working at 10 Hertz): the FIR filter.

34:06

It’s a low pass filter that we use to filter out hemodynamic components of the signal like cardiovascular and respiratory artifacts.

34:17

So if we apply this, we’ll remove those.

34:23

We can also apply an ambient light removal and this performs a linear subtraction for every optode.

34:32

Subtracting the ambient light data from each optode.

34:38

Let’s go ahead and run that.

34:40

So it almost didn't change anything, because we had very clean data to begin with.
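For reference, the ambient light removal just performed amounts to a per-optode linear subtraction. A minimal NumPy sketch, assuming the raw wavelength and ambient channels are arrays of shape (n_samples, n_optodes):

import numpy as np

def remove_ambient(raw_730, raw_850, ambient):
    # Subtract the ambient-light channel from each wavelength, optode by optode.
    return raw_730 - ambient, raw_850 - ambient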

34:46

So let’s go back to Refine. And now we have a sequence of steps that we’ve performed.

34:50

So we've already removed the ambient light; next, I can perform the FIR filtering. Before I do that, let's have a closer look at the data so you can see what happens here.

35:08

Change the settings here, and I’ll zoom in.

35:12

I’ll go between, for example, 100 and 101 seconds.

35:19

I wanted to change the time, so I'll go between 100 and 101 seconds.

35:30

So here, we can see how cardiac activity has influenced the oxygenation signal.

35:37

And of course, it will do that.

35:40

So, applying this FIR filter will essentially flatten that.

35:46

Now, for this kind of experimental design, where we’re looking at minutes worth of data, and we’re looking at grand changes for the whole condition, it’s pretty much irrelevant. We don’t really have to do that.

35:58

However, let’s do that so you can see how it’s done.

36:03

So, we can do the FIR filter, and this is what we end up with.
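For reference, an equivalent low-pass FIR filter for 10 Hz data can be built with SciPy. The cutoff frequency and filter length below are illustrative assumptions; the webinar uses fNIRSoft's built-in preset and does not state its exact parameters:

import numpy as np
from scipy.signal import firwin, filtfilt

def lowpass_fir(data, fs=10.0, cutoff_hz=0.1, numtaps=101):
    # data: (n_samples, n_optodes); zero-phase low-pass along the time axis.
    taps = firwin(numtaps, cutoff_hz, fs=fs)  # linear-phase FIR design
    return filtfilt(taps, 1.0, data, axis=0)  # filtfilt avoids phase distortion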

36:13

We could do further processing steps.

36:16

We could apply motion artifact rejection, median filtering, et cetera, but this dataset is extremely clean, so I'm not going to do that. So we're pretty much good to go at this point, and we can click on Oxy.

36:33

And now, using the modified Beer-Lambert law, we will calculate the changes in oxygenation.

36:42

So we use the refined dataset, calculate oxygenation.
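For reference, a standard two-wavelength form of the modified Beer-Lambert law (the general textbook formulation, not necessarily fNIRSoft's exact implementation) relates the change in optical density at each wavelength to the concentration changes:

\Delta OD(\lambda) = \left( \epsilon_{HbO}(\lambda)\, \Delta[HbO] + \epsilon_{HbR}(\lambda)\, \Delta[HbR] \right) d \cdot DPF(\lambda)

With measurements at 730 nm and 850 nm, this gives two equations in the two unknowns \Delta[HbO] and \Delta[HbR], solved per optode (d is the source-detector separation and DPF the differential pathlength factor). The oxygenation signal shown throughout is then Oxy = \Delta[HbO] - \Delta[HbR], and total hemoglobin is \Delta[HbT] = \Delta[HbO] + \Delta[HbR].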

36:48

And here we go.

36:50

Now, we’re looking at oxy and deoxy hemoglobin.

36:57

The blue line is deoxyhemoglobin, the red line is oxygenated hemoglobin, and we've kept all the data from all the optodes here.

37:07

It can be a bit much to look at.

37:09

We can look at the optode layout view, and that gives us all the optodes. So I can click on a specific optode.

37:17

For instance, something here in the right lateral prefrontal cortex.

37:24

So, we can see things without having all the lines superimposed on each other.

37:33

Let’s go to the display settings, we can change what we’re seeing.

37:39

So: oxygenated hemoglobin, deoxygenated hemoglobin, total hemoglobin, or, what I specifically want to look at, oxygenation.

37:50

That is the difference between oxy- and deoxyhemoglobin.

37:56

And here are the various optodes, 17 and 18 are the short optodes. Let’s have a look at those.

38:06

So, we’ll hide everything and look at the short optodes.

38:11

So they’re fairly flat.

38:14

We can see that the task isn't changing much on the surface of the scalp.

38:21

If we invert the selection, we no longer see those, but we see the rest of the optodes, and we can see that pattern of increased oxygenation

38:31

for the high workload condition between the 1 markers; then, playing the video game, decreased oxygenation.

38:38

And then, again, increase and decrease.

38:45

So, next, let’s obtain some measurements from the data.

38:51

So, we have to define blocks. The way fNIRSoft works is you define blocks of data and save them into variables, and you process these variables to get results.

39:05

We have markers; these markers were entered manually by the experimenter, but they could have been sent over the network or in many other ways. This is simply how it was done in this experiment.

39:19

So, between marker one and marker one.

39:24

We have the high workload condition, so I would just call it high workloads.

39:34

Or, let’s make it simple, just go with High. Hit run and save, and now we can see here, the blocks.

39:44

OK.

39:44

And then, in the space defined between marker two and marker two, we have the low workload condition.

39:55

OK, low workload: run and save, and on the bottom, we can see how these blocks were marked.
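For reference, "define blocks between markers" boils down to pairing consecutive occurrences of the same marker label. A sketch, assuming markers are available as (time_in_seconds, label) tuples; fNIRSoft does this through its GUI or scripts, so the representation and times here are illustrative:

def blocks_between(markers, label):
    # Pair consecutive occurrences of `label` into (start, end) blocks.
    times = [t for t, m in markers if m == label]
    return list(zip(times[0::2], times[1::2]))

markers = [(34, "1"), (315, "1"), (349, "2"), (769, "2")]  # illustrative times
print(blocks_between(markers, "1"))  # [(34, 315)]  high workload block
print(blocks_between(markers, "2"))  # [(349, 769)] low workload block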

40:03

I’m doing everything manually right now, but we could create presets for defining these blocks, or we could generate a script.

40:12

So, FNIRsoft also has a scripting language interface.

40:16

So the entire workflow from beginning to the end, can be a script.

40:21

And you can run that script for all your participants and just plow through the data very quickly.

40:29

But right now, I want to show you how you would do this using the graphic user interface.

40:35

So, now we’ve defined our blocks, we can go ahead and save them.

40:41

So we’ll save the blocks, and I will correct the baseline.

40:49

So I will take the first 100 samples of data at each block to be the local baseline.

40:59

So we'll be looking at the changes in oxygenation within each task condition compared to the first 10 seconds. Because we're sampling data at 10 Hertz, 100 lines of data equals 10 seconds.
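For reference, this local baseline correction is a one-liner per block. A NumPy sketch, assuming each block is an array of shape (n_samples, n_optodes):

import numpy as np

def local_baseline(block, n_baseline=100):
    # Subtract the mean of the first 100 samples (10 s at 10 Hz) per optode.
    return block - block[:n_baseline].mean(axis=0)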

41:17

So, next, we saved the data in what we call the data space.

41:23

So now, we have variables.

41:29

Variables are generated for oxygenated hemoglobin (HBO), deoxygenated hemoglobin (HBR), total hemoglobin (HBT), and OXY, the difference between oxygenated and deoxygenated hemoglobin.

41:47

Let's just select these (I'm holding Shift and clicking here) and view the data.

41:54

So now, in blue and purple we have the high workload conditions, and in orange and brown we have the low workload conditions.

42:08

And we’re looking at change for each optode.

42:12

You can see on the bottom up to 1, 2, 3, up to 18.

42:16

We're looking at the change from its local baseline: what happened from the beginning of each condition on.

42:22

And you can designate the baseline in a variety of ways. But that's the analysis that we're doing right now.

42:30

So, we can see quite a bit difference, right?

42:34

For some, for some of the optodes, we have different patterns, and you would expect that.

42:40

And for the, for the short optodes, there isn’t much going on there.

42:49

What else can we do?

42:51

So we’ve generated this sort of view, but we can process the data in here.

42:56

So we can take those variables; for instance, these are the two high-workload oxygenation variables. Let's put them in here.

43:08

And we can perform actions on them.

43:11

So we can perform temporal processing, So let’s do the mean within the blocks.

43:20

OK, save that, and we’ll call this High.

43:28

High workload, OK, let's execute, and we'll do the same for the low workload conditions. So clear the variables, and now take the low workload condition.

43:42

Change the name of the resulting variable to Low.

43:49

Go back to the data space.

44:07

So, let's change the action. We actually wanted to do the mean across the blocks here, not within the blocks; we essentially ended up with the same thing when we did that.

44:20

So, let’s do the mean within the blocks.

44:33

So, let’s clear the variables that we already have.

44:39

First, let's go ahead and delete those: select the variables and delete them. We'll come back here to Process and add them again. So, block one and block two: select them and move them into the actions here.

45:03

So we want to do processing, averaging.

45:13

And mean, across the blocks.

45:27

OK, so now we have a single variable here, Process. Now, for all 18 optodes, since we had two conditions and two blocks for each condition, we've just averaged the data across the blocks.

46:00

So instead of seeing four bars for each optode, we're seeing two bars' worth of data. We can take this a step further.

46:12

So, we can go back to process.

46:15

Let's clear, and take the variables that we just produced, High and Low.

46:24

And now we can perform spatial processing.

46:27

So we can average within each block.

46:33

OK, and let’s execute that.

46:38

And let’s view, so, again, we have the high workload in purple and the low workload in blue.
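For reference, the temporal and spatial averaging we just did in the GUI reduces, in NumPy terms, to successive means over different axes. The (n_blocks, n_samples, n_optodes) layout below is an assumption about how baseline-corrected blocks for one condition might be stacked:

import numpy as np

def condition_summary(blocks):
    per_block = blocks.mean(axis=1)      # temporal mean within each block -> (n_blocks, n_optodes)
    per_optode = per_block.mean(axis=0)  # mean across blocks -> (n_optodes,)
    return per_optode.mean()             # spatial mean across optodes -> one value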

46:50

That’s not, of course, typically what will happen. You would have data from many participants.

46:56

So, you have all these variables, and once you run the experiment, you would save the variables. Then, when you have data from 10, 20, however many participants you are running, you would load all the variables. So, we would save the variables after each participant and afterwards load them all.

47:19

And when we do that, we can perform statistical analysis right within the software, so we can do t-tests and so on.

47:30

But you need more data to do that.

47:34

One more thing I want to show you is the topographic view.

47:38

So, within the software, we can load a variable.

47:42

So I’ll load changes in oxygenation for one of the high workload conditions.

47:54

I have a lot of windows open at the moment, and we'll overlay that onto the brain.

48:08

And we can change thresholds here, if we need to. We can play it back. So this is the time course of the condition. We can accelerate that, for instance, like accelerating five times.

48:21

So we can see how the oxygenation was changing.

48:25

We can generate a video out of that.

48:30

But typically, what you would do, when you have all these variables from the experiment, is calculate variables that represent the statistically significant changes and display those. Then this sort of utility becomes much more meaningful; right now, it's just illustrative.

48:50

So we can see what happened as the experiment went on: in the beginning, not much, and then it just kept increasing.

48:59

OK, well, let me navigate all my various windows here, and we’re onto our next poll.

49:19

Yes, Hi, Thank you. Let’s see here.

49:22

We’ve got a poll about the areas of the brain you would like to measure.

49:30

So, let us know what you're thinking. And I think this is not a complete list, right, Alex? There were a couple of areas we had to leave out; we have a limitation in our polling system, so we just chose some of the brain areas. Unfortunately, we can't fit them all; we don't have that many options.

49:51

Yeah, so if something's missing from the list, feel free to send us that area; we're happy to have you share that with us.

50:03

It seems like it’s mostly frontal and motor at this point.

50:08

OK, so let’s see here, OK, here’s a good question, lots of good questions here. I don’t have time for all of these questions.

50:14

But Erin asked if there are any issues. No, not that one. I’m sorry.

50:25

So, everything’s shifted. Oh, here we go. Here we go.

50:32

Ashith asked about how stable fNIRS is in terms of motion noise and error when worn with a head-mounted display.

50:44

And Erin, we'll get to your question a little later. So, the question is about wearing it with a head-mounted display.

50:52

So it’s not very different from the typical motion artifact you would get when you’re wearing the device in regular use.

50:59

So if I'm doing something like this, like moving my eyebrows up and down, or making strong facial expressions, with an HMD or without an HMD, that will result in an artifact.

51:13

The good thing is, because it's a hemodynamic signal, these artifacts are easy to find and to eliminate. But you will lose those parts of the data when that happens, because that's just how it is.

51:27

It also depends on the head-mounted display, like how comfortable a fit it's going to be.

51:38

But, generally speaking, the sensor is very thin, and so it works well with most head-mounted displays. I think you should ask us separately about the specific display that you're using and how it's going to fit with the sensors; we can give you some information. But I don't expect additional artifacts in general just because you have a head-mounted display, unless the fit is a bit of a problem and there is some inertia, right?

52:09

Moving it, if you’re having dramatic movements.

52:18

OK, great. Well, thank you. and we’ll get to Erin’s question next and I see the other questions, and we’ll get to as many as we can later.

52:29

So back to you, Alex.

52:33

Yup.

52:34

I mean, just in the context of VR, we have people who use it in CAVEs, and we can use it with HMDs.

52:43

It's all very specific; you have to look at it case by case, OK.

52:50

The experiment that we looked at right now, this paradigm did not really have discrete stimuli. It was just condition one and condition two, condition one, condition two. But I know a lot of people are interested in marking discrete events.

53:05

And then looking at the response for that, for that event, and maybe averaging like a number of similar events.

53:12

And so it becomes very important to be able to mark when a stimulus appeared.

53:20

The task that I used for this experiment, the Add-1 task, was actually created in the Vizard software, which we use for virtual reality. So let's take advantage of the fact that we can send markers.

53:39

Then we can send markers to the COBI software and also to the AcqKnowledge software, OK?

53:48

So this gives us the ability to have discrete event markers.

53:56

When you’re using interactive software, of course you have a lot of information you want to broadcast about these markers.

54:03

The markers can contain information on the number of digits that the person has to remember, because the number of digits may be different, and on the number that you have to add.

54:14

So, you know, you can, you can change all these parameters to modify the difficulty of the task.

54:20

So, you may want to go to five digits, and you have to add three to every digit, which is a different kind of task, and you want to be able to mark that in the software. So, here, we have like, how we can change these parameters.

54:36

And then, the task begins and the markers are being sent.

54:41

And this is what happens in the COBI software, the Cognitive Optical Brain Imaging software.

54:51

So, as we are recording the oxy and deoxy data, we’re listening for markers over the network, and markers just arrived.

55:05

And now, here, we have some more markers, and they can encode information. These are ASCII codes: 51 and 49 are actually the ASCII codes for the characters 3 and 1. So this is how we are encoding that we have the condition where we are remembering three digits and adding one to every digit. And 84 is the ASCII code for T, so true: the participant responded correctly.

55:38

And this way, we can keep track of whether they were getting accurate or inaccurate responses.

55:48

OK, and I think the next one here is incorrect. There we go: that's 70, the ASCII code for F, so false.
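For reference, decoding these numeric markers is just an ASCII lookup. The values are the ones from this experiment; how the codes are grouped into messages is an assumption:

def decode_markers(codes):
    return [chr(c) for c in codes]

print(decode_markers([51, 49]))  # ['3', '1']: remember 3 digits, add 1
print(decode_markers([84]))      # ['T']: correct response
print(decode_markers([70]))      # ['F']: incorrect response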

56:00

So during the workload experiment, if the participant is committing a lot of errors, you may find that the oxygenation levels are actually not increasing; they're almost shutting down. Maybe it's too hard; maybe they're just not trained for the task yet.

56:16

So knowing the rate of responses is very helpful.

56:20

And because you can send these markers, it really helps analyze the data afterwards.

56:28

And now we can do the very same thing in AcqKnowledge.

56:32

So here, we’re recording, electrocardiogram, and respiration, and at the same time we’re receiving information from markers here. So the light bulbs are the stimulus presentation, and these check marks are the responses.

56:53

So in the AcqKnowledge software, we have some extra capabilities that allow us to have different kinds of markers, as well as having the labels of the marker. So here we have a true response.

57:10

And then going back a little bit, you can see here 3 underscore 1: that's the condition of three digits, add one, OK?

57:20

So we can then analyze that data.

57:25

Couple more things I want to talk about is hyperscanning options.

57:30

So, recording from multiple people, because we can synchronize physiological data like ECG and fNIRS. And increasing the number of modalities that you are working with will be the topic of the next webinar.

57:51

So here is a paradigm where we’re taking advantage of the capability of the COBI software to stream in real time the raw data as well as the oxygenation data.

58:02

And our AcqKnowledge software can record, for example, heart rate and stream that as well. So, all that data is streamed over the network, and then it goes into a central application. We have an example here written with Vizard, and Vizard runs on Python. Python is very easy to work with, so we use it for a lot of these paradigms.

58:26

We’re happy to share the source code for how these things work with you, if you’re interested, let us know.
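For reference, the receiving side of such a stream is a plain TCP client. COBI's actual port and wire format are not given here, so the address and the newline-delimited, comma-separated parsing below are purely illustrative assumptions:

import socket

HOST, PORT = "127.0.0.1", 9000  # hypothetical COBI broadcast address

def stream_values():
    with socket.create_connection((HOST, PORT)) as sock:
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                yield [float(v) for v in line.decode().split(",") if v]

for sample in stream_values():
    print(sample)  # feed into your biofeedback or visualization logic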

58:34

So let's take a step back so you can see: here, we have four people.

58:39

And for every person, we can monitor the change in oxygenation from baseline and make it, for instance, change the color of the people. So it’s sort of an index of how they’re performing and we’re also monitoring the heart rate.

58:55

But let's say they're in virtual reality: we could change the lighting of the room, we could change what happens, we can do all sorts of things, like use it for navigation, et cetera. So let's go a little bit forward here.

59:10

So we’re enabling both AcqKnowledge and the COBI software and you can see now data from AcqKnowledge from four people.

59:26

It’s going straight over the network.

59:30

It’s going into the application that we have made in Vizard OK, and then it’s also taking the data from the COBI software. And let’s go back a little bit here.

59:44

I want to show you something very important about the COBI software. You can have all sorts of devices, and not only real devices: you can use virtual devices that take data from a file that you've already recorded, and you can do the same thing in AcqKnowledge.

59:59

So for development, for any sort of biofeedback application, you don't need to have people connected. You can run off simulated devices for all of this, and it could be multiple devices. That's very powerful if you're going to do that sort of paradigm.

1:00:18

And the topic for the next webinar will be multimodal approaches to workload, so we'll combine eye tracking and physiological signals like electrocardiogram and functional near infrared, and obtain a much more complete picture of the workload state of individuals.

1:00:41

Time for our next poll.

1:00:48

OK, great, thank you. I'm going to launch this poll: what other physiological signals are you interested in?

1:00:56

OK, so back to Erin's question: are there any issues or problems that may arise from using different gain settings for the different channels?

1:01:07

No. The great thing about this is that these are relative changes from baseline, so the only concern is that you don't increase amplification so much that you reach the limit of the sensor, and you don't keep it so low that you can't detect the change. Which is why we can change the gains for the individual optodes: to dial in to be more in the center of the range of the system. We're measuring relative change, so what the absolute gain is doesn't matter.

1:01:41

And there can be differences in skull thickness, for instance, and you need to have this sort of flexibility to overcome that.

1:01:52

OK, great. All right, so it looks like most people are interested in eye tracking and EEG, and then a bunch are also interested in ECG and HRV. Not too much interest in EDA, and a little bit of interest in blood pressure.

1:02:14

Thank you all for participating in our fourth poll.

1:02:19

And, Alex, I'm closing the poll. Do you have more that you want to share? Definitely. I just want to mention: one of the reasons why we do these polls is so that we know what we should show you in the next webinar.

1:02:35

So if most of the people are interested in EEG, then, of course, we'll make sure to talk about that as well in the upcoming webinars. So rest assured that we'll cover what people are looking for in terms of these multimodal approaches.

1:02:52

So we have very good solutions that can be used at the same time on the head, combining EEG and prefrontal functional near infrared.

1:03:04

OK, so are we ready to jump into the final Q & A?

1:03:09

Yes.

1:03:10

All right, I will make myself presenter here.

1:03:15

But if I need to jump back to you, if you have to show something else, just let me know. I'll have the software up and running if we need it. Yeah.

1:03:23

OK, great. All right, so this goes way back to the beginning of the presentation. Thank you for your patience, and I don't know if I'm going to pronounce her name right.

1:03:32

It’s Bijoia.

1:03:35

How long should each experiment be at a minimum?

1:03:41

That is a difficult question. It depends on the strength of the experimental effects that you're going to achieve, and whether your stimuli are continuous or discrete.

1:03:53

But your main constraint is the fact that this is a hemodynamic response. If you were doing a skin conductance experiment with a single trial, like presenting an image, a few seconds later you'd have a response.

1:04:09

Here, the response peaks about 7 seconds after the stimulus, and when you're talking about cognitive activity,

1:04:18

it's not the same animal as a startle and a physiological response. It may take some time to process the information that you're seeing. Maybe you're presented with a question, you're thinking about the question, answering it.

1:04:36

So you need to allow time.

1:04:40

And that really depends on the protocol.

1:04:42

So I can't answer the question directly, but you generally need to give yourself quite a bit more time than you would with measures such as EMG or skin conductance, et cetera. And if you're doing discrete trials, you need time after each trial to return to baseline. So you can't just fire one trial after the other, really.

1:05:07

OK all right, thank you.

1:05:13

So this is Ankibi: why would one change the amount of light?

1:05:23

Well, OK, this is most likely referring to the amount of light we're sending into the brain, right? So, OK.

1:05:32

This again depends; there are different factors influencing how much light can penetrate and then come back.

1:05:44

So if you have skin with lighter pigmentation, or, for instance, a thinner skull, or a very young participant (maybe you are working with a neonatal population, et cetera), you want to reduce the amount of light; otherwise, the amount of light that comes back is just going to saturate the sensor.

1:06:08

So, we always want to send in as much light as possible, so we can get the best possible contrast, essentially. But you need to have that flexibility to change it, and the amount of light that's getting in is pretty much equivalent to daylight.

1:06:30

So we're talking about fine adjustments here to an already very low amount of light that we're sending in.

1:06:40

OK, Great, So now we have a question from Sharon Sho Chang.

1:06:46

How would we determine the right position for the head probes?

1:06:52

Well, this depends on which part of the brain you're measuring from. When you're using the 18-optode sensor, you align it exactly in the middle and just above the eyebrows. We have a publication we can refer you to about how, when you have this sort of placement, you can map it onto an average brain.

1:07:16

But of course, everybody’s a little bit different so you can get fairly decent accuracy for when you’re averaging across your population of participants. But if you want to be really accurate, then you need a structural scan of the person.

1:07:39

But that’s usually not really needed here, right? These are large areas that you’re measuring from. So you either use like a big sensor that covers everything.

1:07:50

Or you can take one of the smaller sensors and place it over a specific area of interest, because in all likelihood, unless you're doing exploratory research, you are looking for a change in a specific brain area. That's where the small sensors are very convenient: you're looking at, say, the right lateral prefrontal cortex, and you place it right there, right?

1:08:17

The little sensor also gives you a bit more flexibility to move it up and down, et cetera.

1:08:25

So it depends on the researcher, really. That's kind of the answer.

1:08:32

OK.

1:08:35

What language can the script be written in?

1:08:43

So, let’s open up here. Script Editor.

 

1:08:59

OK, so this is the scripting language. So, we can open up, there’s some, there’s some sample scripts in here.

1:09:15

And we have all the commands in here. So it's a custom language.

1:09:21

Basically, you can use the existing scripts, and then you can learn how to use the fNIRSoft scripting language.

1:09:34

I believe it’s its own language.

1:09:40

But by using examples already existing, you should be able to get up and running fairly quickly.

1:09:50

Here, the idea is you're not doing incredibly complicated things; you're automating procedures that already exist in the software.

1:10:00

Now, if you want to do much more advanced data analysis, you can go back to the data space. Let’s just find.

1:10:14

Our data space here.

1:10:16

So, data space, OK, and we can export the variables to MATLAB.

1:10:25

So, you really have a number of options here. fNIRSoft, however, is very powerful and very easy to use.

1:10:34

So, if you can do your analysis in here great.

1:10:39

But if you want to do something more custom, then you can do that.

1:10:46

OK, all right, how about: is there a way to acquire real-time calculated oxy- and deoxyhemoglobin concentrations in third-party software?

1:11:02

Yes, let’s open up the COBI software here and let’s go, if we go to broadcast here.

1:11:13

So, let’s just create, Use File.

1:11:17

So I’m going to use an existing file, and then go to broadcast from the network.

1:11:26

So here we go: we can send the raw light data, or we can send the hemoglobin data.

1:11:32

So for every optode, we're going to get oxy and deoxy; we can send everything and then just parse the data that comes in.

1:11:42

And when you use this option to send the hemoglobin data, the baseline is taken at the beginning, the very moment you begin the recording.

1:11:56

And then it’s all relative to that baseline, So if you want to make further baseline corrections, then you will do those in the software that’s receiving the data, OK.

1:12:15

All right, so a bunch of people asked about sanitation techniques between participants, and it's basically an alcohol wipe that you would use in between. Yeah, I'm gonna just open up our FAQ here. I think we probably cover it somewhere in our FAQ, but, yeah, if you want to, of course, you can just gently clean the sensor.

1:12:42

That, I think that should be good enough.

1:12:48

OK, great.

1:12:50

And what part of the brain, this is from Ankibi, what part of the brain is associated with each optode? Is it based on Brodmann areas of the brain? Yup.

1:13:01

So, sure, in our FAQ we addressed that.

1:13:05

So, generally speaking, Brodmann areas 10, 9, 45, and 46 can be reached by our prefrontal cortex sensors.

1:13:18

As far as the exact correspondence, I believe we have a table somewhere. If not, we will probably include that in the document that we create after the end of the webinar, so you know, based on standard placement, where these Brodmann areas will fall. That's, of course, assuming you're placing it

1:13:39

the standard way that I described, exactly in the middle and exactly above the eyebrows. For any other placement, you just have to mark where you are placing it, so you know where you are.

1:14:00

All right.

1:14:01

Looks like that's about everything we can answer here; the questions that are more in depth, we'll answer in the Q&A document that we'll send out in a week or two. And I just want to say one more thing: there's one more webinar still upcoming later in the year. We will set a date for that soon as well.

1:14:24

But there will also be a webinar on the educational aspects of how to use fNIRS to teach students.

1:14:34

So just keep that in mind because I think some of you answered that you’re interested in both in research and education.

1:14:41

And so this educational imager that we have, the idea is that it’s very low cost.

1:14:48

We have lessons that we have created so students can record a complete fNIRS protocol.

1:14:59

So, that’s definitely, I think something that people should know about.

1:15:06

Yeah, so, I'll just show you real quick on my screen here that there are, you know, the fNIR 2000 systems.

1:15:14

There’s the mobile device. Someone asked earlier about the size of that mobile device, you can see it on the arm here.

1:15:22

And a bunch of people also are just asking about the citations and the documents that you were referencing, so if you could send me those things. I can include that in a follow-up e-mail to everyone.

1:15:34

We have a wonderful support department, so people who are setting this up, you know, for the first time or have questions about how to set up of course we’re here for you, we offer free support.

1:15:44

And then we have more advanced support, custom packages, if you need custom work done.

1:15:50

If you're wanting to reach out to your salesperson, we let the salespeople know that you've attended this webinar.

1:15:55

So they’ll reach out and see if you have additional questions. But a great way to reach us quickly is to just go right here on our website.

1:16:04

Request a demo or if you scroll down to local sales, Aimee Walker is my rep because I’m in California.

1:16:10

But you will see your representative regardless of your country, with the best contact information to get ahold of us. So that's how you can reach out directly to your rep.

1:16:21

And I want to just share that we do a lot of webinars every year, and you can access those on demand webinars here on our webinars page, live and on demand events.

1:16:36

And you’ll see that we have a few others on fNIRS as well. But we also have the upcoming webinars for you to review, and of course, Alex mentioned the one coming up on March 30th, Multimodal workload and adding other signals. So we hope you’re able to join us for that, you can just get to that webinar’s page and sign up if you haven’t already.

1:17:01

All right, well, just a reminder that today's webinar was recorded, and we will e-mail you a link to today's recording along with the slides and the Q&A document. That Q&A document will take us some time;

1:17:13

you know, we have a process, but we'll get that out to you as soon as possible, along with links to those resources that Alex mentioned.

1:17:21

When you close the webinar window, you will see a survey. Please complete that so we can have your feedback.

1:17:27

We'll also send it in one of the follow-up e-mails from GoToWebinar.

1:17:31

And you may visit BIOPAC.com for additional resources, screencast on specific features and analysis tools, application notes providing step-by-step instructions, and, of course, additional future events, webinars, and in person events, we look forward to seeing you all in person. We miss you all.

1:17:53

We miss having our one-on-one face-to-face meetings, so, Alex, anything else before we conclude?

1:18:02

I just want to also say: in your comments, when you're filling out the survey, if you're interested in more content for us to create on the topic of functional near infrared (for instance, a more in-depth analysis webinar, et cetera), let us know.

1:18:20

That’s also something potentially under consideration.

1:18:25

OK, great, well, thank you all. And if we didn't get to your question, we'll do our best to get it answered in the Q&A document.

1:18:33

All right, that concludes today's webinar. I thank you all for participating. Thank you, Alex, for all the work you put into this presentation.

1:18:41

Have a great day. Stay safe, everyone.

1:18:45

Goodbye.

1:18:46

Goodbye, and thank you, and hope to see you about two weeks from now.

1:18:52

Sounds great. All right. Bye, everyone.

 
