
Combining biosignals with RFID to develop a multimodal-shared control interface

Abstract

The paper presents an approach to developing assistive devices by combining multimodal biosignals with radio frequency identification (RFID). Brain and eye signals have been used as multimodal biosignals to control the movement of a robot in four directions and bring it near the object along a predefined path. RFID shares control for object identification, and a gripper arm attached to the robot's end effector performs pick-and-place operations. The horizontal electrooculography (EOG) signal has been used for x-directional movement control, and the electroencephalography (EEG) signal evoked by visual stimuli, the steady-state visual-evoked potential (SSVEP), has been used for y-directional movement control of the robot. The SSVEP signal has also been used to ring an alarm in case of an emergency call by the user. Two parameters, classification accuracy (CA) and information transfer rate (ITR), have been calculated to evaluate the performance of the proposed multimodal-shared control model, and they show improved results compared with previous literature. The results also demonstrate that the proposed model can be used for real-time mobility-assistance applications.

Introduction

Humans inherently interact with the world through multiple modalities. A multimodal man-machine interface (MMI) system can be a combination of two or more subsystems that may be independent of or dependent on each other. If a biosignal-based multimodal interface is to be designed, the constituent mono-modal systems can be EOG, electromyography (EMG), EEG, etc. Multimodal MMI systems allow separate modalities to be assigned to distinct tasks, which makes the system more flexible and independent and provides more functionality and a larger number of control commands. In a multimodal MMI system, users need not perform the same operations continuously, which means less mental and physical demand and higher usability [1]. There is considerable scope for combining more modalities into a single multimodal system, and the multimodal interface has opened many new opportunities for more demanding applications in different areas [2, 3]. However, increasing the number of control signals in any biosignal-based MMI system, whether mono-modal or bimodal, may cause fatigue and exhaustion in users and make real-time control impractical. Moreover, expert assistance is sometimes needed to interact with an MMI system that is controlled exclusively by biosignals.

To further increase the number of control commands and build a robust system without causing additional user fatigue, a shared control architecture can be incorporated. Shared control can support the biosignal interface and improve the user's control of the device in tasks related to tele-operation, tele-manipulation, and assistive robot control. Since traditional assistive devices and rehabilitation systems use keyboards, joysticks, or other conventional user interfaces, more advanced hands-free MMI systems are necessary. The combination of the EOG signal and RFID technology has been used in shared control mode as an assistive device in recent research [4]. In the shared control approach, the control tasks for a predefined application are shared between the MMI and some other intelligent system such as RFID technology, IR sensors, or computer vision. RFID technology does not need a line of sight, which makes it useful for tracking and object-identification applications; it offers a relatively high detection range, and additional information can be written to the RFID tag for more precise identification of objects.

Shared control with a multimodal interface is a novel approach that provides multiple independent or parallel controls without demanding much effort from the user. The approach can increase the number of control commands and improve system performance for complex control of an external device or application [5], with a higher ITR and better usability at lower physical and mental demand. A high-frequency SSVEP-based BCI has been combined with computer vision as shared control to perform robotic arm control with multiple degrees of freedom without moment-by-moment supervision by the user [6]. Recent research on a specific robotic control task reported a success rate of only 50% for a pure hybrid BCI versus 85% for a BCI with a shared control approach [7].

By combining human and machine intelligence, shared control has obvious advantages over direct control through the MMI alone. It combines human-level planning with machine-level fine control to achieve better control performance by reducing human error. Shared control can interact with the surroundings and relieves the user of continuously sending instructions. This reduction in workload can reduce fatigue and consequently increase the overall performance of the system. In this study, a multimodal-shared control interface has been proposed by combining EOG and SSVEP biosignals with RFID. The proposed model has been used to control the movements of a developed prototype robot for a pick-and-place application.

Methods

The experimental procedure for designing an EOG-SSVEP-RFID-based multimodal-shared control interface model has been discussed in this section.

Data acquisition

EEG (SSVEP) and EOG signals were acquired from nine healthy participants (aged between 21 and 34 years) with normal or corrected-to-normal vision using a g.USBamp (a biosignal amplifier provided by g.tec) and active Ag/AgCl wet electrodes. The sampling rate was 256 Hz for both biosignal recordings. The in-built bandpass filter (cutoff frequencies 0.5–30 Hz) removes baseline drift and eliminates high-frequency noise. The acquired data were sent to the PC via USB for further processing and analysis. All nine subjects performed the same actions several times in three sessions to obtain statistically significant results. The volunteers were informed about the experiment and the data acquisition procedure, and their consent was obtained before recording; participation was voluntary. No specific eligibility criterion was imposed, in order to keep the system general. However, subjects with pacemakers or electrical stimulators were excluded when using the g.tec BCI equipment. All data were acquired in the laboratory at the National Institute of Technical Teacher’s Training and Research, Chandigarh, India.
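For offline re-analysis of recorded trials, a digital filter equivalent to the amplifier's built-in band-pass stage can be applied. The sketch below is illustrative only (the study relied on the amplifier's in-built filter); the Butterworth design and filter order are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # sampling rate used for both EOG and EEG recordings (Hz)

def bandpass_0p5_30(x, fs=FS, order=4):
    """Zero-phase 0.5-30 Hz band-pass, mirroring the amplifier's in-built
    filter that removes baseline drift and high-frequency noise.
    The 4th-order Butterworth design is an illustrative assumption."""
    b, a = butter(order, [0.5, 30.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

# example: filter one channel of a recorded 10-s trial
raw = np.random.randn(10 * FS)      # placeholder for recorded samples
clean = bandpass_0p5_30(raw)
```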

In this work, horizontal EOG (HEOG) signals were acquired using only one electrode placed on the left or right side of the eye. For EEG (SSVEP) recording, three positions “O1,” “O2,” and “Oz” in the occipital lobe of the scalp as per the international 10–20 system were chosen [8]. Since visual stimulation has been used in the present work, the occipital lobe has been chosen for signal recording. One common ground electrode was placed on the forehead, and the common reference electrode was placed at the right earlobe. All the electrodes have been placed as per the international standard positions for recording horizontal EOG signal and SSVEP signal as shown in Fig. 1.

Fig. 1

Electrode placement positions for SSVEP (right) and HEOG (left) recording

While acquiring the horizontal EOG (HEOG) signal, the user had to move his/her eyes alternately in the horizontal left and right directions without blinking. To acquire the EEG signal, users sat comfortably on a chair holding the g.SSVEPbox (a stimulation device provided by g.tec with four LEDs flickering at four different frequencies). They were instructed to fixate on the four LEDs, flickering at 10 Hz, 11 Hz, 12 Hz, and 13 Hz, one at a time. As shown in Fig. 2, each SSVEP trial started with a 2-s rest, followed by a visual cue (green LED) indicating the target stimulus. The cue appeared for 3 s, and subjects were asked to shift their gaze to the target within the cue duration. Then, the stimuli flickered for 7 s. Therefore, each trial lasted for 10 s. The acquired data were sent to the PC via USB for feature extraction and classification.

Fig. 2

The timeline of a trial for SSVEP data acquisition in the experiment

A multi-threshold-based algorithm has been developed to distinguish two HEOG signals. The algorithm is as follows:

  (i) First, the algorithm checks the amplitude of the signal.

  (ii) Signals with very low amplitude are discarded, as they could be noise or disturbance; this step also smooths the signal.

  (iii) If the signal amplitude falls within the range of an eye-movement direction, the pattern of the pulse is checked in the next step.

  (iv) If both the amplitude and the pulse-pattern conditions are satisfied for either of the two movements, the eye movement is detected and the corresponding control signal (5 or 6) is generated.

  (v) The generated control signal remains high for the next 2 s even if the signal value changes, to minimize errors due to small fluctuations.

  (vi) If both conditions are not satisfied for either movement, the signal can be a blink or the result of artifacts and interference; it is discarded and control signal 0 is generated.

With the help of the multi-threshold-based EOG detection algorithm, the horizontal EOG signal has been detected and classified. The algorithm has a discrete output in the form of a number (5 or 6) corresponding to eye movement in a given horizontal direction, either left or right. The device driver then converts these classification outputs into commands and sends them to the prototype robot, which moves and provides feedback to the user.
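A minimal Python sketch of this multi-threshold detection logic is given below. The threshold values, the window handling, and the simple pulse-pattern test are illustrative assumptions, not the authors' calibrated values (the actual thresholds were checked and updated in a short test before the application task).

```python
import numpy as np

def classify_heog_window(window, noise_thresh, left_range, right_range, fs=256):
    """Sketch of the multi-threshold HEOG classifier (steps i-iv, vi).
    Returns 5 (left), 6 (right) or 0 (noise/blink/artifact)."""
    peak = window[np.argmax(np.abs(window))]

    # (i)-(ii): very low amplitudes are treated as noise and discarded
    if abs(peak) < noise_thresh:
        return 0

    # (iii): crude pulse-pattern check - the deflection should return close
    # to baseline by the end of the window (saccade-like pulse, not a drift)
    returns_to_baseline = abs(np.mean(window[-int(0.2 * fs):])) < noise_thresh

    # (iv): amplitude range and pulse pattern must both be satisfied
    if returns_to_baseline:
        if left_range[0] <= peak <= left_range[1]:
            return 5   # left eye movement
        if right_range[0] <= peak <= right_range[1]:
            return 6   # right eye movement

    # (vi): blink or artifact -> rejected
    return 0

def hold_output(decisions, fs=256, hold_s=2.0):
    """Step (v): once a non-zero command is issued, hold it for 2 s so that
    small fluctuations cannot toggle the control signal."""
    out, held, remaining = [], 0, 0
    for d in decisions:              # per-sample (or per-window) decisions
        if remaining > 0:
            out.append(held)
            remaining -= 1
        elif d != 0:
            held, remaining = d, int(hold_s * fs) - 1
            out.append(d)
        else:
            out.append(0)
    return out
```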

The EEG interface uses the minimum energy (ME) combination for feature extraction, as it requires no training and automatically finds the best combination of channels [9]. The SSVEP configuration uses features based on spectral analysis of the EEG. The ME combination-based method uses the signal-to-noise ratio (SNR) as an indicator of the stimulation frequency. The extracted features were the SSVEP amplitudes (amplitudes at the stimulus frequencies) obtained from the frequency-domain spectrum by an autoregressive model. The background EEG is assumed to be white or pink noise; therefore, the SNR is calculated as the ratio of the power density at a specific frequency to that of the estimated noise. The analyzed spectrum is calculated from combined channels obtained by the spatial-filtering operation of the ME combination method; these combined channels exhibit an enhanced SNR of the target signals. The overall process works on 3-s windows (768 samples) with an overlap of 717 samples and consists of three steps: pre-processing, classification, and change rate/majority weight analysis. These three steps are executed five times per second to produce a new output every 200 ms. A linear discriminant analysis (LDA) classifier is then applied to classify the SSVEP signal based on the feature matrix obtained from the ME combination. All experimental procedures and analyses were performed in the MATLAB/Simulink environment.
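The sketch below illustrates the main idea of the ME combination and SNR-based detection on a single 3-s window (768 samples at 256 Hz). It is a simplified stand-in: the actual pipeline also uses an autoregressive spectral estimate, change rate/majority weight analysis, and an LDA classifier, all omitted here, and the harmonic count, noise-band width, and argmax decision are assumptions.

```python
import numpy as np

def minimum_energy_weights(Y, fs, stim_freqs, n_harmonics=2):
    """Minimum-energy channel combination in the spirit of Friman et al. [9].
    Y: (n_samples, n_channels) EEG window, e.g. 768 x 3 for O1, O2, Oz."""
    n = Y.shape[0]
    t = np.arange(n) / fs
    # SSVEP model: sines/cosines at the stimulus frequencies and harmonics
    cols = []
    for f in stim_freqs:
        for h in range(1, n_harmonics + 1):
            cols.append(np.sin(2 * np.pi * h * f * t))
            cols.append(np.cos(2 * np.pi * h * f * t))
    X = np.column_stack(cols)
    # remove the SSVEP components; the remainder is treated as noise
    Y_noise = Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]
    # weights minimising the remaining (nuisance) energy: eigenvector of the
    # noise covariance with the smallest eigenvalue
    _, evecs = np.linalg.eigh(Y_noise.T @ Y_noise)
    return evecs[:, 0]

def snr_at(s, fs, f0, band=1.0):
    """SNR at f0: power in the target bin over the mean power of the
    neighbouring bins (the assumed white/pink noise floor)."""
    spec = np.abs(np.fft.rfft(s)) ** 2
    freqs = np.fft.rfftfreq(len(s), 1 / fs)
    target = spec[np.argmin(np.abs(freqs - f0))]
    neighbours = spec[(np.abs(freqs - f0) > 0.2) & (np.abs(freqs - f0) < band)]
    return target / neighbours.mean()

def classify_window(Y, fs=256, stim_freqs=(10, 11, 12, 13)):
    """Return the index of the most likely target LED and the per-frequency SNRs."""
    w = minimum_energy_weights(Y, fs, stim_freqs)
    s = Y @ w
    snrs = [snr_at(s, fs, f) for f in stim_freqs]
    return int(np.argmax(snrs)), snrs
```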

EEG-EOG multimodal interface

Neither the EEG interface nor the EOG interface causes physical fatigue if used for a short period. However, constantly looking at LEDs flashing at different frequencies or continuously moving the eyeballs in different directions can create exhaustion and affect the performance of the system. Combining the EOG and EEG modalities into a multimodal interface allows them to complement each other and compensate for their individual limitations. An EEG-EOG-based multimodal interface model has been developed in the MATLAB/Simulink environment. It works synchronously in two modes: EOG mode moves the robot along the x-axis (left/right), and EEG mode moves it along the y-axis (forward/backward). EEG mode is also used for emergency calls from the user.

A predetermined EEG control signal is used to facilitate smooth switching between two modes as explained in Fig. 3.

Fig. 3

Flowchart for EEG-EOG multimodal interface system

The flowchart in Fig. 3 shows the conditions for smooth synchronous switching between two modes.
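Since Fig. 3 is not reproduced here, the following sketch only illustrates the general idea of synchronous mode switching driven by a predetermined EEG control signal; the particular EEG class used as the toggle, the starting mode, and the command encoding are assumptions made for illustration.

```python
class MultimodalInterface:
    """Illustrative state machine for the EOG/EEG mode switching of Fig. 3."""

    EOG_MODE, EEG_MODE = "EOG", "EEG"

    def __init__(self, switch_class=4):
        self.mode = self.EEG_MODE          # assumed starting mode
        self.switch_class = switch_class   # predetermined EEG control signal

    def step(self, eeg_class, eog_class):
        """eeg_class: SSVEP classifier output (0 = none); eog_class: 0, 5 or 6."""
        if eeg_class == self.switch_class:
            # the predetermined EEG signal toggles the active mode
            self.mode = self.EOG_MODE if self.mode == self.EEG_MODE else self.EEG_MODE
            return None                    # no movement command on a switch
        if self.mode == self.EEG_MODE:
            return ("y-axis", eeg_class)   # forward/backward or emergency alarm
        return ("x-axis", eog_class)       # left/right (control signals 5 or 6)
```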

EOG-SSVEP-RFID-shared control interface

A multimodal-shared control interface has been designed and developed by combining EOG-SSVEP and RFID for controlling a robot in real time.

The EEG-EOG-based multimodal system controls the robot’s movement as explained in Fig. 3. The classification outputs from the Simulink program are sent to the robot via an Arduino controller. RFID shares control over object interaction. A camera records the robot’s movement to measure the time taken to complete the task.
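In the implemented system, the classification outputs are forwarded from Simulink to the Arduino; the pyserial sketch below merely illustrates the equivalent host-side step of mapping classifier outputs to single-byte commands on a serial port. The port name, baud rate, byte encoding, and the mapping of SSVEP classes to directions are assumptions.

```python
import serial  # pyserial

# port name and baud rate are illustrative assumptions
ser = serial.Serial("COM3", 9600, timeout=1)

# classifier output -> single-byte command understood by the robot firmware
COMMANDS = {
    5: b"L",   # EOG: left
    6: b"R",   # EOG: right
    1: b"F",   # SSVEP class (assumed): forward
    2: b"B",   # SSVEP class (assumed): backward
    3: b"A",   # SSVEP class (assumed): emergency alarm
}

def send_command(classifier_output):
    """Forward one classifier decision to the robot controller."""
    cmd = COMMANDS.get(classifier_output)
    if cmd is not None:
        ser.write(cmd)
```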

A high-frequency MFRC522 RFID reader module is used in this work for shared control. Its wireless operating frequency is 13.56 MHz, and its maximum data transfer speed is 10 Mbit/s. The read range is approximately 30 cm from the tag. It is a low-cost chip-based board that requires a 3.3 V power supply and can be connected directly to the Arduino through appropriate pin connections. An RFID 1 K key-fob tag was used in this work.

Prototype robot design and development

A prototype robot with a gripper arm has been designed and developed in the laboratory at the National Institute of Technical Teacher’s Training and Research, Chandigarh, India. The visible components of the prototype robot are labeled in Fig. 4. Some other components, such as the DC motors and motor drivers, are mounted on the underside and are not visible, to keep the prototype as compact as possible. An Arduino Uno, programmed with the Arduino Software (IDE), has been used to interface Simulink with the prototype robot. Two geared DC motors operating between 3 and 12 V DC have been used along with two 65-mm high-quality rubber wheels. The speed and direction of rotation are controlled using the PWM pins of the Arduino Uno, and metal-gear analog servo motors are used for gripper control. Other components used in the robot are an L298N DC motor driver, two rechargeable lithium-ion batteries for the power supply, a gripper arm, a robot chassis, connecting wires, a USB connector to connect with the CPU, an LED to indicate that an object has been identified, and a general-purpose electromagnetic piezo buzzer to indicate an emergency call by the patient. A switch is used to power on the robot and connect it to the computer. A lightweight gripper arm is attached to the end effector of the robot. The MFRC522 RFID reader is placed near the end effector and reads the information stored in the tag attached to the object. Based on the information gathered, the robot either picks the object and places it at a predefined zone or sounds a warning alarm to indicate a wrong object.
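The behaviour after an RFID read runs as an Arduino program on the robot; the Python sketch below only mirrors that decision logic for readability. The tag ID, the stub robot class, and its method names are hypothetical stand-ins for the firmware's pin operations.

```python
class PrototypeRobot:
    """Stand-in for the Arduino-controlled hardware; in the real firmware each
    method maps to digital/PWM pin operations (motors, servos, LED, buzzer)."""

    def led_on(self):
        print("LED on: object identified")

    def pick(self):
        print("gripper: pick object from zone 2")

    def place_in_zone(self, zone):
        print(f"gripper: place object in zone {zone}")

    def buzzer_alarm(self):
        print("buzzer: wrong object")


EXPECTED_TAG = "04 A3 1B 2F"  # illustrative prewritten tag ID, not the real one


def on_tag_read(tag_id, robot):
    """Shared-control step executed after the RFID reader returns a tag ID;
    no further commands from the user are needed at this point."""
    if tag_id == EXPECTED_TAG:
        robot.led_on()
        robot.pick()
        robot.place_in_zone(1)
    else:
        robot.buzzer_alarm()


if __name__ == "__main__":
    on_tag_read("04 A3 1B 2F", PrototypeRobot())
```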

Fig. 4

The final prototype robot with a gripper arm

Application description

In the experiment, a realistic application has been designed in which the developed prototype robot has to reach an object (in this case, an empty glass) following a predefined path, identify it, and perform a pick/place operation. The glass is placed in zone 2 and needs to be moved to zone 1. The movement of the prototype robot and the assistive pick/place application were controlled by the proposed multimodal-shared control interface model. It has two control modes: multimodal control is executed in the Simulink environment, and shared control is executed in the remote surroundings through RFID and the Arduino controller.

The complete process for the application task is shown in Fig. 5. Before starting the application task, a short test was performed to check whether the selected thresholds for the EOG movements needed to be updated. Then, the final test of the designed application task was performed with the prototype robot and the object, as described below:

  1. Electrodes were placed at the appropriate positions, and proper skin contact was achieved with the help of conducting gel.

  2. The robot is switched on to connect it to the PC and placed in its initial position.

  3. The DC motors of the prototype robot receive the multimodal control commands through the Arduino (EOG signals generated by the user moving the eyeballs in the predefined horizontal directions (left, right) and EEG signals generated by focusing on the four flickering LEDs) and move the robot accordingly in different directions. Following a predefined path, it reaches the proximity of the object in zone 2.

  4. The RFID reader reads the tag information of the object placed in zone 2. When the object with a prewritten tag ID comes within range of the RFID reader, an LED glows to indicate object identification.

  5. If the tag is identified, a signal is sent to the servo motors controlling the gripper arm to pick the object from zone 2 and place it in zone 1. The gripper arm is programmed in Arduino to perform the pick-and-place task automatically after object identification.

  6. If the tag is not identified, a buzzer rings to give the user audio feedback.

Fig. 5

Complete process for application task

The schematic diagram of the multimodal and shared control interface system used for continuous control of the robot in mobility-assistance applications is shown in Fig. 6.

Fig. 6

Multimodal interface and shared control architecture to control prototype robot with a gripper arm

Results and discussion

To validate and test the proposed model, the assistive application has been performed with the developed prototype robot. The complete process for the application task has been explained in the previous section. Each session took less than an hour to complete the application tests, including validation of the EEG and EOG interfaces. In the validation performed before the final tests, the EOG interface showed a 100% success rate for all users and the EEG interface a 90% success rate; the remaining 10% corresponds to errors or false detections.

Performance of EOG-EEG-based multimodal interface model

To evaluate the performance of the EEG-EOG-based multimodal interface model, classification accuracy and ITR have been calculated. Table 1 shows the performance of the proposed multimodal interface in terms of classification accuracy and ITR for all subjects. It also gives the mean and standard deviation of the classification accuracy and ITR of the multimodal system, which were 97.07 ± 2.49% and 60.45 ± 5.21 bits/min, respectively.

Table 1 Detection time, classification accuracy and ITR of EEG-EOG-based multimodal interface

The results of the proposed EEG + EOG multimodal system have also been compared with previous research works. Table 2 summarizes previous literature on EEG + EOG MMI systems in comparison with the present work. The experimental results for the performance-evaluation parameters, classification accuracy and ITR, suggest that the proposed multimodal interface is promising for MMI-related applications compared with any single-modality EEG or EOG interface, as well as with previous MMI work.

Table 2 Summary of the previous work on EEG + EOG MMI systems

Performance of EOG-SSVEP-RFID-based robot control architecture

In the present work, classification accuracy and information transfer rate (ITR) have been chosen as the performance-evaluation parameters and calculated for the EOG-EEG-RFID multimodal-shared control interface. Classification accuracy and ITR are commonly used evaluation measures for BCIs. Task completion time has also been calculated to evaluate the multimodal-shared control model [17]. Classification accuracy is defined as the ratio of correctly classified trials to the total number of trials, and task speed is the total time taken to complete the application task. ITR denotes the total information transferred per unit time (bits/min) and can be calculated using the formula of Wolpaw et al. [18].
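For reference, the ITR definition of Wolpaw et al. [18], for N possible commands, classification accuracy P, and an average selection time of T seconds, is:

```latex
\mathrm{ITR} = \frac{60}{T}\left[\log_2 N + P\log_2 P + (1 - P)\log_2\frac{1 - P}{N - 1}\right]\ \text{bits/min}
```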

Final verification was done by controlling the mobile robot to perform the assistive application explained in the Application description section. The average (mean) time taken to complete the pick/place action was measured for each user and is shown in Table 3. The completion time averaged over all users was around 28 s, which is close to the minimum time of 27 s required to complete the task (the minimum corresponds to more than 90% of the average completion time).

Table 3 Performance of EOG-EEG-RFID-based multimodal-shared control model

Table 3 provides significant information in several respects. All subjects were able to perform the application task. Some subjects (S5, S6, and S7) had previous experience with EOG and EEG interface systems, and their total times are slightly lower than the average. This means that the performance of the proposed system can be improved with practice.

Table 4 shows that the total number of control commands increased from 5 (multimodal interface) to 12 with the proposed multimodal-shared control interface model, without extra physical or mental effort from the user. The lower physical and mental demand of the proposed model enhances the usability of the system.

Table 4 Control signals and corresponding control commands by the multimodal-shared control model

Comparative study

A comparative study has been presented for the proposed EEG-EOG-RFID-based multimodal and shared control interface. The performance of the proposed model is compared with that of the EEG-EOG-based multimodal interface in Table 5. To simplify the comparison between the two systems, their averaged performances were calculated. Average classification accuracy, average ITR, number of control commands, and user fatigue were taken as the performance parameters for comparing the proposed EEG-EOG-RFID-based multimodal-shared control model with the EEG-EOG-based multimodal interface model. It can be observed from Table 5 that the average ITR of the EEG-EOG-based multimodal interface increased significantly, from 60.45 ± 5.21 to 198.81 ± 8.29 bits/min, when RFID was added as a shared control technology. In addition, the average classification accuracy improved from 97.07 ± 2.49 to 98.53 ± 1.24%, and the number of control commands increased from 5 to 12. The promising results of the proposed multimodal-shared control system shown in the table confirm that combining the shared control approach with a multimodal interface improves the performance of the MMI system. The gripper arm is programmed in Arduino to complete the application task automatically after object identification, without any moment-by-moment commands from the user, resulting in less physical and mental demand. The shared control approach enables complex tasks to be completed with fewer commands from the user.

Table 5 Comparison of proposed multimodal and shared control interface with multimodal interface

Classification accuracy and ITR are both commonly used standard measures in usability studies: accuracy is used to assess effectiveness, and ITR is used to assess efficiency. A high ITR provides a large selection of options and fast responses, giving the user a precise and crisp sense of control. When the brain issues an instruction, the body naturally responds without perceptible delay; a system with high accuracy and high ITR can give the user a similar feeling. Therefore, high ITR and high accuracy are essential to make a BCI system practical in reality.

Table 6 summarizes research works related to the shared control approach and their findings. Very few research works have been found in which shared control technology is incorporated into BCI systems. The proposed multimodal-shared control model has shown improved performance compared with previous literature, which can be considered a notable contribution to this field of research.

Table 6 Summary of the previous literature on shared control-based MMI systems

System usability

Measurement of system usability covers three separate components: effectiveness, efficiency, and user satisfaction. The System Usability Scale (SUS) is a standard, reliable, and widely accepted tool to measure system usability and user satisfaction [23]. The SUS is a simple ten-item questionnaire, used in the present work to evaluate system usability and user satisfaction; the SUS scores for all subjects are shown in Table 7.

Table 7 SUS score for all subjects

Most of the users strongly agreed that the system was easy to use and did not require much background knowledge to get going. An above-average number of users strongly agreed that the various functions in the system were well integrated and that most people would learn to use the system very quickly. On the other hand, the lowest score was obtained for the item about using the system without the support of a technical person; some users were not confident about placing the electrodes correctly on their own. Seven out of the ten questions scored above average; consequently, the overall user experience and satisfaction level were judged to be above average. The average SUS score across all subjects was 79.4, which is above the benchmark of 68 and falls in the acceptable range according to the preset standards of the SUS tool.
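For completeness, the standard SUS scoring procedure [23] is sketched below; the example responses are illustrative and are not a participant's actual answers.

```python
def sus_score(responses):
    """Standard SUS scoring: responses is a list of 10 answers on a 1-5 scale
    in questionnaire order. Odd-numbered items contribute (answer - 1),
    even-numbered items contribute (5 - answer); the sum is scaled by 2.5
    to give a 0-100 score."""
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# illustrative example (not measured data):
print(sus_score([5, 2, 4, 2, 4, 1, 5, 2, 4, 2]))   # -> 82.5
```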

Conclusions

In the present work, the investigators developed a robotic prototype whose motion control is based on two biomedical signals (EOG and EEG) acquired from healthy human subjects. A real-time application for picking and placing an object was then built by combining this interface with an RFID-based shared control architecture, in which RFID is used to identify and interact with the object to be picked.

In further studies, a portable standalone system can be developed by introducing wireless electrodes and wireless communication into the present system. The model was tested with healthy subjects only and needs to be tested with real potential users with motor disabilities to make it more usable as an assistive system.

Availability of data and materials

The datasets generated and analyzed during the current study are not publicly available as the authors do not have consent from all participants to publish their data, but are available from the corresponding author on reasonable request.

Abbreviations

RFID: Radio frequency identification

EOG: Electrooculography

EEG: Electroencephalography

SSVEP: Steady-state visual-evoked potential

CA: Classification accuracy

ITR: Information transfer rate

MMI: Man-machine interface

EMG: Electromyography

PC: Personal computer

USB: Universal serial bus

BCI: Brain-computer interface

LED: Light-emitting diode

HEOG: Horizontal electrooculography

LDA: Linear discriminant analysis

MATLAB: Matrix laboratory

CPU: Central processing unit

PWM: Pulse width modulation

IDE: Integrated development environment

DC: Direct current

ID: Identity document

References

  1. Amiri S, Rabbi A, Azinfar L, Fazel-Rezai R (2012) A review of P300, SSVEP, and hybrid P300/SSVEP brain-computer interface systems. In: Brain-Computer Interface Systems - Recent Progress and Future Prospects, pp 195–213

  2. Edlinger G, Guger C (2011) A hybrid brain-computer interface for smart home control. In: Int Conf on Human-Computer Interaction: Interaction Techniques and Environments, HCI 2011, Lecture Notes in Computer Science. Springer, Berlin Heidelberg, pp 417–426. https://doi.org/10.1007/978-3-642-21605-3

  3. Chen C, Zhou P, Belkacem AN, Lu L, Xu R, Wang X, Tan W, Qiao Z, Li P, Gao Q, Shin D (2020) Quadcopter robot control based on hybrid brain-computer interface system. Sensors Mater 32:991–1004

  4. Iáñez E, Úbeda A, Azorín JM, Perez-Vidal C (2012) Assistive robot application based on an RFID control architecture and a wireless EOG interface. Rob Auton Syst 60:1069–1077. https://doi.org/10.1016/j.robot.2012.05.006

  5. Kumari P, Mathew L, Syal P (2017) Increasing trend of wearables and multimodal interface for human activity monitoring: a review. Biosens Bioelectron 90:298–307. https://doi.org/10.1016/j.bios.2016.12.001

  6. Chen X, Zhao B, Wang Y, Gao X (2019) Combination of high-frequency SSVEP-based BCI and computer vision for controlling a robotic arm. J Neural Eng 16(2):30523962

  7. Cao L, Li G, Xu Y, Zhang H, Shu X, Zhang D (2021) A brain-actuated robotic arm system using non-invasive hybrid brain-computer interface and shared control strategy. J Neural Eng 18(4):1–2

  8. Zhang Y, Jin J, Qing X, Wang B, Wang X (2012) LASSO based stimulus frequency recognition model for SSVEP BCIs. Biomed Signal Process Control 7:104–111. https://doi.org/10.1016/j.bspc.2011.02.002

  9. Friman O, Volosyak I, Graser A (2007) Multiple channel detection of steady-state visual evoked potentials for brain-computer interfaces. IEEE Trans Biomed Eng 54:742–750

  10. Kim CH, Choi B, Kim DG, Lee S, Jo S, Lee PS (2016) Remote navigation of turtle by controlling instinct behavior via human brain-computer interface. J Bionic Eng 13:491–503

  11. Ma JX, Zhang Y, Cichocki A, Matsuno F (2015) A novel EOG/EEG hybrid human-machine interface adopting eye movements and ERPs: application to robot control. IEEE Trans Biomed Eng 62:876–889

  12. Puanhvuan D, Khemmachotikun S, Wechakarn P, Wijarn B, Wongsawat Y (2017) Navigation-synchronized multimodal control wheelchair from brain to alternative assistive technologies for persons with severe disabilities. Cogn Neurodyn 11(2):117–134

  13. Wang H, Li Y, Long J, Yu T, Gu Z (2014) An asynchronous wheelchair control by hybrid EEG-EOG brain-computer interface. Cogn Neurodyn 8(5):399–409

  14. Postelnicu C-C, Talaba D (2013) P300-based brain-neuronal computer interaction for spelling applications. IEEE Trans Biomed Eng 60(2):534–543

  15. Koo B, Nam Y, Choi S (2014) A hybrid EOG-P300 BCI with dual monitors. In: Proc Int Winter Workshop on Brain-Computer Interface (BCI), pp 1–4

  16. Lee MH, Williamson J, Won D-O, Fazli S, Lee S-W (2018) A high performance spelling system based on EEG-EOG signals with visual feedback. IEEE Trans Neural Syst Rehabil Eng 26(7):1443–1459

  17. Shuaieb W, Oguntala G, AlAbdullah A, Obeidat H, Asif R, Abd-Alhameed RA, Bin-Melha MS, Kara-Zaïtri C (2020) RFID RSS fingerprinting system for wearable human activity recognition. Futur Internet 12:1–12

  18. Wolpaw JR, Ramoser H, McFarland DJ, Pfurtscheller G (1998) EEG-based communication: improved accuracy by response verification. IEEE Trans Rehabil Eng 6(3):326–333

  19. Tang J, Zhou Z (2017) A shared-control based BCI system for a robotic arm control. In: 1st International Conference on Electronics Instrumentation and Information Systems, EIIS 2017, pp 1–5

  20. Úbeda A, Iáñez E, Azorín JM (2013) Shared control architecture based on RFID to control a robot arm using a spontaneous brain-machine interface. Rob Auton Syst 61(8):768–774

  21. Li T, Hong J, Zhang J, Guo F (2014) Brain-machine interface control of a manipulator using small-world neural network and shared control strategy. J Neurosci Methods 224:26–38

  22. Zhang S, Gao X, Chen X (2022) Humanoid robot walking in maze controlled by SSVEP-BCI based on augmented reality stimulus. Front Hum Neurosci 16:1–9

  23. Brooke J (2013) SUS: a retrospective. J Usability Stud 8(2):29–40


Acknowledgements

The authors acknowledge all administrative and technical support given by the National Institute of Technical Teachers Training and Research, Chandigarh. All experimental procedures were conducted in the electrical laboratory of the institute.

Funding

This research received no external funding.

Author information

Authors and Affiliations

Authors

Contributions

Conceptualization, P.K.; methodology, P.K.; writing—original draft preparation, P.K.; writing—review and editing, P.K., L.M., and N.K.; and supervision, L.M. and N.K. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Preeti Kumari.

Ethics declarations

Ethics approval and consent to participate

The research work was approved by the Institutional Ethics Committee, Panjab University Chandigarh (PUIEC), with letter number PUIEC/2017/68/A/06/02, dated March 24, 2017. The approval letter is attached in the supporting file named Ethical Clearance Certificate.jpg. All subjects declared the absence of neurological or mental illnesses. The nature, purpose, and expected duration of the study, other relevant details, and the complete experimental procedure were explained to all the subjects, and consent was obtained before the experiment. A copy of the informed consent form used for participants is attached in the supporting files to assure readers that the study was conducted in an ethically sound manner. Participation was voluntary, and subjects were free to withdraw at any time, without giving any reason and without their medical care or legal rights being affected.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


Cite this article

Kumari, P., Mathew, L. & Kumar, N. Combining biosignals with RFID to develop a multimodal-shared control interface. J. Eng. Appl. Sci. 70, 119 (2023). https://doi.org/10.1186/s44147-023-00291-9


Keywords