Faiq Khalid
Vienna University of Technology
Deep learning · Machine learning · Computer science · Real-time computing · Robustness (computer science)
42 Publications
7 H-index
165 Citations
Publications (41)
#1 Faiq Khalid (TU Wien: Vienna University of Technology), H-Index: 7
#2 Syed Rafay Hasan (Tennessee Technological University), H-Index: 9
Last: Muhammad Shafique (TU Wien: Vienna University of Technology), H-Index: 29
(4 authors in total)
Timely detection of Hardware Trojans (HTs) has become a major challenge for secure integrated circuits. We present a run-time methodology for HT detection, named SIMCom, that employs multi-parameter statistical traffic modeling of the communication channel in a given System-on-Chip (SoC). The main idea is to model the communication using multiple side-channel parameters, such as the Hurst exponent, the standard deviation of the injection distribution, and the hop distribution, jointly t...
#1 Faiq Khalid (TU Wien: Vienna University of Technology), H-Index: 7
#2 Syed Rafay Hasan (Tennessee Technological University), H-Index: 9
Last: Muhammad Shafique (TU Wien: Vienna University of Technology), H-Index: 29
(4 authors in total)
Timely detection of Hardware Trojans (HTs) has become a major challenge for secure integrated circuits. We present a run-time methodology for HT detection that employs multi-parameter statistical traffic modeling of the communication channel in a given System-on-Chip (SoC). Towards this, it leverages the Hurst exponent, the standard deviation of the injection distribution, and the hop distribution jointly to accurately identify HT-based online anomalies. At design time, our methodology employs a pro...
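The Hurst exponent mentioned above measures how self-similar (trending vs. alternating) a traffic trace is. A minimal sketch, assuming a simple single-window rescaled-range (R/S) estimate rather than SIMCom's actual estimator:

```python
import math

def hurst_rs(series):
    """Single-window rescaled-range (R/S) estimate of the Hurst exponent:
    H ~ log(R/S) / log(n). H > 0.5 suggests persistent (trending) traffic,
    H < 0.5 anti-persistent (alternating) traffic."""
    n = len(series)
    mean = sum(series) / n
    # Range of cumulative deviations from the mean
    cum, dev = [], 0.0
    for x in series:
        dev += x - mean
        cum.append(dev)
    r = max(cum) - min(cum)
    # Standard deviation of the series
    s = (sum((x - mean) ** 2 for x in series) / n) ** 0.5
    if r == 0 or s == 0:
        return 0.5  # degenerate (constant) trace: treat as uncorrelated
    return math.log(r / s) / math.log(n)
```

A run-time detector of the kind described could profile a baseline H per channel and flag windows whose estimate drifts from it.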
#1 Syed Ali Asadullah Bukhari (National University of Sciences and Technology), H-Index: 2
#2 Faiq Khalid (TU Wien: Vienna University of Technology), H-Index: 7
Last: Jörg Henkel (KIT: Karlsruhe Institute of Technology), H-Index: 44
(5 authors in total)
Dynamic thermal management (DTM) techniques are widely used to attenuate thermal hot spots in many-core systems. Conventionally, DTM techniques are analyzed using simulation and emulation methods, which are inherently non-exhaustive and cannot provide a comprehensive comparison between DTM techniques given the wide range of corresponding design parameters. To address these limitations, we propose to use model checking, a state-space based fo...
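Unlike simulation, model checking visits every reachable state. A toy explicit-state sketch, assuming a hypothetical two-core model with discrete temperature levels (not the paper's actual DTM formalization), that verifies a throttling policy keeps every reachable state below a critical temperature:

```python
from collections import deque

CRITICAL = 5  # temperature level that must never be reached

def successors(state, throttle):
    """Per-core moves: a core at or above the throttle level is forced to
    cool by one step; otherwise it may idle or heat by one step."""
    per_core = [[t - 1] if t >= throttle else [t, t + 1] for t in state]
    combos = [()]
    for opts in per_core:                       # Cartesian product of moves
        combos = [s + (t,) for s in combos for t in opts]
    return combos

def check_safe(init=(0, 0), throttle=3):
    """Exhaustive BFS over the state space; returns (safe, states_explored).
    Safe means no reachable state has a core at CRITICAL or beyond."""
    seen, frontier = {init}, deque([init])
    while frontier:
        state = frontier.popleft()
        if any(t >= CRITICAL for t in state):
            return False, len(seen)             # counterexample found
        for nxt in successors(state, throttle):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, len(seen)
```

With throttling at level 3 every reachable state stays below the critical level; raising the throttle point above CRITICAL yields a counterexample. This exhaustive yes/no answer over all design parameters is exactly what the simulation-based analyses criticized above cannot give.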
#1 Hassan Ali, H-Index: 2
#2 Faiq Khalid, H-Index: 7
Last: Muhammad Shafique, H-Index: 29
(7 authors in total)
In this paper, we introduce a novel technique based on Secure Selective Convolutional (SSC) techniques in the training loop that increases the robustness of a given DNN by allowing it to learn the data distribution based on the important edges in the input image. We validate our technique on convolutional DNNs against the state-of-the-art attacks from the open-source Cleverhans library using the MNIST, CIFAR-10, and CIFAR-100 datasets. Our experimental results show that the attack su...
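The notion of "important edges" can be illustrated with a crude gradient-magnitude edge mask. This is a hypothetical stand-in for the edge-extraction step; the paper's actual SSC operator is not reproduced here:

```python
def edge_mask(img, thresh=0.2):
    """Mark pixels whose local gradient magnitude exceeds a threshold.
    `img` is a 2D list of intensities in [0, 1]; borders stay unmarked."""
    h, w = len(img), len(img[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # horizontal difference
            gy = img[y + 1][x] - img[y - 1][x]   # vertical difference
            if (gx * gx + gy * gy) ** 0.5 > thresh:
                mask[y][x] = 1
    return mask

def select_edges(img, thresh=0.2):
    """Keep only edge pixels; non-edge pixels are zeroed, so training
    emphasizes the structurally important parts of the input."""
    m = edge_mask(img, thresh)
    return [[img[y][x] * m[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]
```

A flat image produces an empty mask, while an intensity step marks the pixels on either side of the step, which is the structure such a training scheme would preserve.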
#1 Hassan Ali (National University of Sciences and Technology), H-Index: 2
#2 Faiq Khalid (TU Wien: Vienna University of Technology), H-Index: 7
Last: Semeen Rehman (TU Wien: Vienna University of Technology), H-Index: 17
(6 authors in total)
Training data is crucial to robust neural inference, and deep neural networks (DNNs) depend heavily on this assumption; adversaries can exploit it to mount various attacks. Adversarial defenses include several techniques, some of which are applied during the preprocessing stages (e.g., noise filtering). This article analyzes the impact of some preprocessing filters and proposes a selective preprocessing method which increases robustness and reduces the...
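One of the preprocessing stages mentioned above, noise filtering, can be sketched as a median smoothing pass. A 3x3 window is assumed here; the specific filters evaluated in the article may differ:

```python
def median_filter(img, k=3):
    """k x k median smoothing as a preprocessing defense: an isolated
    adversarial perturbation on a single pixel is voted out by its
    neighborhood. `img` is a 2D list; borders are copied unchanged."""
    h, w, r = len(img), len(img[0]), k // 2
    out = [row[:] for row in img]            # work on a copy
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = sorted(img[yy][xx]
                            for yy in range(y - r, y + r + 1)
                            for xx in range(x - r, x + r + 1))
            out[y][x] = window[len(window) // 2]
    return out
```

The trade-off a *selective* scheme targets is visible even here: the same smoothing that removes perturbations also blurs legitimate fine detail, so filtering everything unconditionally costs clean accuracy.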
#1 Mahum Naseer (TU Wien: Vienna University of Technology), H-Index: 1
#2 Mishal Fatima Minhas (National University of Sciences and Technology), H-Index: 1
Last: Muhammad Shafique (TU Wien: Vienna University of Technology), H-Index: 29
(6 authors in total)
With a constant improvement in the network architectures and training methodologies, Neural Networks (NNs) are increasingly being deployed in real-world Machine Learning systems. However, despite their impressive performance on "known inputs", these NNs can fail absurdly on the "unseen inputs", especially if these real-time inputs deviate from the training dataset distributions, or contain certain types of input noise. This indicates the low noise tolerance of NNs, which is a major reason for th...
1 Citation
#1 Muluken Hailesellasie (Tennessee Technological University), H-Index: 2
#2 Jacob Nelson (Tennessee Technological University), H-Index: 1
Last: Syed Rafay Hasan (Tennessee Technological University), H-Index: 9
(4 authors in total)
The advancement of deep learning has taken the technology world by storm in the last decade. Although enormous progress has been made in algorithm performance, the security of these algorithms has not received much attention from the research community. As more industries adopt these algorithms, the issue of security becomes even more relevant. Security vulnerabilities in machine learning (ML), especially in deep neural networks (DNNs), are becoming a concern. Vari...
1 Citation
#1 Alberto Marchisio (TU Wien: Vienna University of Technology), H-Index: 4
#2 Muhammad Abdullah Hanif (TU Wien: Vienna University of Technology), H-Index: 8
Last: Muhammad Shafique (TU Wien: Vienna University of Technology), H-Index: 29
(7 authors in total)
In the Machine Learning era, Deep Neural Networks (DNNs) have taken the spotlight due to their unmatched performance in several applications, such as image processing, computer vision, and natural language processing. However, as DNNs grow in complexity, their associated energy consumption becomes a challenging problem. This challenge is heightened in edge computing, where devices are resource-constrained and operate on a limited energy budget. Therefore, specialized optimi...
6 Citations
#1 Faiq Khalid (TU Wien: Vienna University of Technology), H-Index: 7
#2 Hassan Ali (National University of Sciences and Technology), H-Index: 2
Last: Muhammad Shafique (TU Wien: Vienna University of Technology), H-Index: 29
(7 authors in total)
Adversarial examples have emerged as a significant threat to machine learning algorithms, especially to the convolutional neural networks (CNNs). In this paper, we propose two quantization-based defense mechanisms, Constant Quantization (CQ) and Trainable Quantization (TQ), to increase the robustness of CNNs against adversarial examples. CQ quantizes input pixel intensities based on a “fixed” number of quantization levels, while in TQ, the quantization levels are “iteratively learned during the ...
2 Citations
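The Constant Quantization (CQ) defense described above can be sketched in a few lines. Intensities normalized to [0, 1] and uniformly spaced levels are assumed; the paper's exact level placement is not reproduced:

```python
def constant_quantize(pixels, levels=4):
    """Constant Quantization (CQ) sketch: snap each intensity in [0, 1]
    to the nearest of `levels` fixed, evenly spaced values. Small
    adversarial perturbations that stay within one quantization bin
    are destroyed by the snapping."""
    step = 1.0 / (levels - 1)                  # bin width between levels
    return [round(p / step) * step for p in pixels]
```

Two inputs that differ by less than half a bin, such as a pixel and its slightly perturbed copy, quantize to the same value, which is the robustness mechanism the defense relies on; Trainable Quantization (TQ) instead learns where those levels sit.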
#1 Faiq Khalid (TU Wien: Vienna University of Technology), H-Index: 7
#2 Muhammad Abdullah Hanif (TU Wien: Vienna University of Technology), H-Index: 3
Last: Muhammad Shafique (TU Wien: Vienna University of Technology), H-Index: 29
(5 authors in total)
Most data manipulation attacks on deep neural networks (DNNs) during the training stage introduce perceptible noise that can be countered by preprocessing during inference, or identified during the validation phase. Therefore, data poisoning attacks during inference (e.g., adversarial attacks) are becoming more popular. However, many of them do not consider the imperceptibility factor in their optimization algorithms, and can be detected by correlation and structural similarity an...
5 Citations
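The structural-similarity check the abstract alludes to can be sketched as a single-window SSIM over flattened intensities. The standard metric uses local sliding windows and specific stabilizing constants; this simplification is for illustration only:

```python
def ssim_global(a, b, c1=1e-4, c2=9e-4):
    """Single-window SSIM between two equal-length intensity vectors in
    [0, 1]: compares mean luminance, contrast, and structure. Returns a
    value near 1 for perceptually similar inputs."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n                      # means
    va = sum((x - ma) ** 2 for x in a) / n               # variances
    vb = sum((x - mb) ** 2 for x in b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    return ((2 * ma * mb + c1) * (2 * cov + c2)) / \
           ((ma * ma + mb * mb + c1) * (va + vb + c2))
```

An attack tuned only for misclassification can leave a visible similarity signature under such a score, whereas a perceptibility-aware attack, of the kind the abstract argues for, would keep it near 1.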