New Approach Allows for Faster Ransomware Detection


2022-05-16 North Carolina State University (NC State)

[Image: computer screen showing a pirate flag. Photo credit: Michael Geiger]

Engineering researchers have developed a new approach to implementing ransomware detection techniques, one that can detect a broad range of ransomware far more quickly than previous systems.

Ransomware is a type of malware. When it infiltrates a system, it encrypts that system's data so that users can no longer access it. The perpetrators then demand payment from the operators of the infected system in exchange for restoring access to their own data.

<Related Information>

FAXID: FPGA-Accelerated XGBoost Inference for Data Centers using HLS

Archit Gajjar, Priyank Kashyap, Aydin Aysu and Paul Franzon, North Carolina State University; and Sumon Dey and Chris Cheng, Hewlett Packard Enterprise
30th IEEE International Symposium on Field-Programmable Custom Computing Machines (FCCM)    Presented: May 15–18

Abstract

Advanced ensemble trees have proven quite effective at providing real-time predictions for tasks such as ransomware detection, medical diagnosis, recommendation engines, fraud detection, failure prediction, and crime risk assessment, to name a few. In particular, XGBoost, one of the most prominent and widely used decision-tree frameworks, has gained popularity due to various optimizations on the gradient boosting framework that provide increased accuracy for classification and regression problems. XGBoost's relatively fast training, handling of missing values, flexibility, and parallel processing make it a strong candidate for data center workloads. Today's data centers, with enormous Input/Output Operations per Second (IOPS), demand real-time accelerated inference with low latency and high throughput because of the significant data processing required by applications such as ransomware detection or fraud detection. This paper showcases an FPGA-based XGBoost accelerator, designed with High-Level Synthesis (HLS) tools and design flow, that accelerates binary classification inference. We employ the Alveo U50 and U200 to demonstrate the performance of the proposed design and compare it with existing state-of-the-art CPU (Intel Xeon E5-2686 v4) and GPU (Nvidia Tensor Core T4) implementations on relevant datasets. We show a latency speedup of our proposed design over the state-of-the-art CPU and GPU implementations, along with gains in energy efficiency and cost-effectiveness. The proposed accelerator is up to 65.8x and 5.3x faster in latency than the CPU and GPU, respectively. The Alveo U50 is the more cost-effective device, and the Alveo U200 stands out as more energy-efficient.
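For readers unfamiliar with the workload the abstract describes, the sketch below illustrates what XGBoost-style binary classification inference computes: each decision tree is walked to a leaf, the leaf margins are summed across the ensemble, and a sigmoid converts the sum into a probability. This is a hypothetical, pure-Python illustration (not the FAXID implementation); the trees, feature names, and values are made up. The flat array-of-nodes tree layout shown here is the kind of fixed, branch-free data structure that lends itself to pipelining in an HLS-generated FPGA design.

```python
import math

# Each tree is flattened into a list of nodes.
# Internal node: (feature_index, threshold, left_child, right_child)
# Leaf node:     ("leaf", margin_value, None, None)

def tree_margin(nodes, x):
    """Walk one flattened tree and return its leaf margin for sample x."""
    i = 0
    while True:
        kind, value, left, right = nodes[i]
        if kind == "leaf":
            return value
        # Internal node: kind is the feature index; go left if below threshold.
        i = left if x[kind] < value else right

def predict_proba(ensemble, x, base_score=0.0):
    """Sum the margins of all trees and squash with a sigmoid,
    as XGBoost does for binary logistic objectives."""
    margin = base_score + sum(tree_margin(t, x) for t in ensemble)
    return 1.0 / (1.0 + math.exp(-margin))

# Two toy trees over two made-up features (e.g. file entropy, write rate).
t0 = [(0, 0.5, 1, 2), ("leaf", -1.0, None, None), ("leaf", 1.2, None, None)]
t1 = [(1, 0.3, 1, 2), ("leaf", -0.4, None, None), ("leaf", 0.8, None, None)]
ensemble = [t0, t1]

# Both splits go right: margin = 1.2 + 0.8 = 2.0, probability = sigmoid(2.0).
print(predict_proba(ensemble, [0.9, 0.7]))
```

On hardware, the appeal of this computation is that every tree can be evaluated independently and each traversal is a short chain of compare-and-branch steps over a fixed-size node table, which is what makes low-latency, high-throughput FPGA pipelines feasible.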
