<!DOCTYPE HTML>
<html>
<head>
<title>SiMLA2026 Workshop</title>
<meta name="description" content="Homepage of SiMLA workshop" />
<meta name="keywords" content="Security in Machine Learning and its Applications">
<link rel="stylesheet" type="text/css" href="css/style.css">
<style>
.rainbow-text {
margin-left: 10px;
font-weight: bold;
font-size: 16px;
background: linear-gradient(
90deg,
#ff0000,
#ff7f00,
#ffff00,
#00ff00,
#0000ff,
#4b0082,
#8f00ff,
#ff0000
);
background-size: 400%;
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
animation: rainbowMove 4s linear infinite;
}
@keyframes rainbowMove {
0% { background-position: 0% }
100% { background-position: 400% }
}
</style>
</head>
<body>
<div class="container">
<header id="main_header">
<h1>8th International Workshop on Security in Machine Learning and its Applications (SiMLA)</h1>
<h2><a href="https://acns2026.github.io/" target="_blank">SiMLA2026 in conjunction with ACNS2026 (June 22-25, 2026), Stony Brook, New York, USA</a></h2>
</header>
<nav id="navbar">
<ul>
<li> <a href="#">Home</a></li>
<!-- <li> <a href="contents/program.html">Program</a></li> -->
<li> <a href="contents/cfp.html">Call for Papers</a></li>
<li> <a href="contents/author.html">Author Instructions</a></li>
<li> <a href="contents/committee.html">Committee</a></li>
<!-- <li> <a href="contents/keynote.html">Keynote</a></li> -->
<!-- <li> <a href="http://acns2025.fordaysec.de/registration/">Registration</a></li> -->
<li> <a href="contents/past_events.html">Past Events</a></li>
</ul>
</nav>
<div id="main_pic">
<!-- the picture should be updated to Stony Brook -->
<img src="images/StonyBrook.jpg" alt="Stony Brook" width="800" />
</div>
<!--
<h3>News</h3>
<p><em>13/05/2025 The program has been updated <a href="https://cimssworkshop.github.io/contents/program.html">Here</a></em></p>
TBA
<div class="highlight">
<h3><a href=""> Click here for a CFP flyer. Please feel free to print and distribute it! </a></h3>
</div>
-->
<h3>Important Dates</h3>
<ul>
<li> Workshop date: June 22-25, 2026 (in parallel with the ACNS main conference)</li>
<li>
Workshop Paper Submission Deadline: March 20, 2026, 23:59 (Anywhere on Earth)
<span class="rainbow-text"> Submission is Open</span>
</li>
<li> Notification of Acceptance/Rejection: April 20, 2026</li>
<li> Submission of camera-ready papers: May 1, 2026</li>
</ul>
<h3>Workshop Description</h3>
<p align="justify">With the rapid advancement of computing hardware, learning algorithms, and the explosive growth of data, machine learning (ML) technologies have been widely adopted across diverse domains such as facial recognition, intelligent video surveillance, autonomous driving, and beyond. Despite their remarkable success, the security and privacy implications of ML systems remain insufficiently understood. In particular, adversarial machine learning continues to expose unique vulnerabilities in models, yet there is still a lack of systematic methods to assess and improve their robustness. At the same time, growing public awareness of data privacy raises critical concerns about how to train and deploy ML models without compromising sensitive information.</p>
<p align="justify">In addition, rapid progress toward increasingly capable Artificial General Intelligence (AGI) systems (e.g., foundation models and AI agents) introduces new risks that go beyond conventional ML settings. These include issues of content provenance, detecting and mitigating mis/disinformation, ensuring trustworthy deployment at scale, and safeguarding both AGI systems and AI agents against misuse or adversarial manipulation.</p>
<p align="justify">This workshop aims to bring together researchers and practitioners to explore these pressing challenges. We solicit high-quality contributions addressing a wide range of topics, including but not limited to adversarial learning, robust algorithm design and evaluation, privacy-preserving machine learning techniques, and secure ML system deployment. The workshop will provide a forum for participants to exchange cutting-edge ideas, present novel solutions, and discuss emerging trends that bridge theoretical advances with real-world applications.</p>
<!--
<h3>Contact: </h3>
<ul>
<li>Leo Zhang, leo.zhang@griffith.edu.au, Griffith University, Australia</li>
<li>Yifeng Zheng, yifeng.zheng@polyu.edu.hk, The Hong Kong Polytechnic University, Hong Kong</li>
<li>Fuyi Wang, fuyi.wang@rmit.edu.au, RMIT University, Australia</li>
</ul>
<h3>Contact:</h3>
<ul>
<li><strong>Workshop Chair:</strong> Leo Zhang, leo.zhang@griffith.edu.au, Griffith University, Australia</li>
<li><strong>Workshop Chair:</strong> Yifeng Zheng, yifeng.zheng@polyu.edu.hk, The Hong Kong Polytechnic University, Hong Kong</li>
<li><strong>Web Chair:</strong> Fuyi Wang, fuyi.wang@rmit.edu.au, RMIT University, Australia</li>
</ul>
-->
<p> <b>There will be an ACNS Best Workshop Paper Award with a prize of EUR 500, sponsored by Springer.</b></p>
<br>
<footer id="main_footer">
<img src="images/acns-logo_L.jpg" alt="ACNS Logo" width="150" />
<img src="images/LNCS-Logo.jpg" alt="LNCS Logo" width="370" />
<!--
<img src="images/griffith.png" alt="Griffith University Logo" width="120" />
<img src="images/hkploy.png" alt="PolyU Logo" width="120" />
<img src="images/RMIT.png" alt="RMIT Logo" width="200" />
-->
</footer>
</div>
</body>
</html>