Dissertation/ Thesis
Adversarial attacks and defense for arrhythmia classification
Title: | Adversarial attacks and defense for arrhythmia classification |
---|---|
Authors: | Jayhne, Mukkul I |
Contributors: | Wang, Xuyu, Ouyang, Jinsong |
Source: | oai:alma.01CALS_USL:11242312150001671 |
Publisher: | California State University, Sacramento, Computer Science Department |
Publication year: | 2021 |
Subject terms: | ECG Classification, Adversarial Attacks, Adversarial Defence |
Description: | The electrocardiogram (ECG) has been a mainstay of cardiac care since the earliest heart-monitoring techniques were developed. It has improved heart monitoring, paved the way for more accurate techniques, and can be used to detect conditions such as arrhythmia, heart attacks, coronary heart disease, and cardiomyopathy. The wide use of the ECG has drawn special attention to modern automated interpretation strategies: machine learning algorithms and artificial intelligence techniques now play a pivotal role in detecting abnormalities and identifying irregularities in the rhythm of the human heart. However, these techniques are highly vulnerable to adversarial attacks. Attacks on heart-beat classification can mislead the system into producing incorrect predictions, and this susceptibility could have fatal consequences, raising concerns about the adoption of modern techniques that were introduced to reduce human intervention. This work demonstrates the risk of misclassification through adversarial attacks on an arrhythmia classification system. A defense against such attacks is also designed, exhibiting the importance of robust models given the susceptibility of ordinary models to adversarial attacks. The attacks degrade the model's performance by introducing perturbations into the model and dataset; the defense trains the model on these perturbations to reduce the attacks' effectiveness. |
Document type: | thesis |
Language: | English |
Relation: | http://hdl.handle.net/20.500.12741/rep:1955 |
Availability: | https://hdl.handle.net/20.500.12741/rep:1955 |
Accession number: | edsbas.85817555 |
Database: | BASE |
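The abstract describes attacks that perturb the input data and a defense that trains the model on those perturbations. The record does not name a specific attack method, so as an illustration only, the sketch below uses the Fast Gradient Sign Method (FGSM), a common way to craft such perturbations; the toy signal, gradient values, and `epsilon` budget are all assumptions, not taken from the thesis.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.05):
    """FGSM: shift each sample by epsilon in the direction of the
    sign of the loss gradient, so the perturbation is bounded by
    epsilon per sample but chosen to maximally increase the loss."""
    return x + epsilon * np.sign(grad)

# Toy 1-D "ECG" segment (illustrative, not real data)
x = np.sin(np.linspace(0.0, 2.0 * np.pi, 8))

# Made-up gradient of the classifier's loss w.r.t. the input; in a real
# attack this would come from backpropagation through the model.
grad = np.array([0.3, -0.2, 0.5, -0.1, 0.0, 0.4, -0.6, 0.2])

x_adv = fgsm_perturb(x, grad, epsilon=0.05)

# The perturbation never exceeds the epsilon budget.
print(np.max(np.abs(x_adv - x)))  # → 0.05
```

Adversarial training, the defense the abstract describes, would then mix such `x_adv` examples (with their original labels) into the training set so the model learns to classify perturbed beats correctly.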