<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta name="description"
content="SSNLP 2024">
<meta name="author" content="">
<meta name="keywords"
content="SSNLP 2024, Singapore Symposium">
<title>SSNLP 2024: The 2024 Singapore Symposium on Natural Language Processing
</title>
<!-- Bootstrap core CSS -->
<link href="vendor/bootstrap/css/bootstrap.min.css" rel="stylesheet">
<!-- Custom styles for this template -->
<link href="css/scrolling-nav.css" rel="stylesheet">
<link rel="shortcut icon" type="image/x-icon" href="favicon.png">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
<link rel="stylesheet" type="text/css" href="fonts/font-awesome-4.7.0/css/font-awesome.min.css">
<!--===============================================================================================-->
<link rel="stylesheet" type="text/css" href="vendor/animate/animate.css">
<!--===============================================================================================-->
<link rel="stylesheet" type="text/css" href="vendor/select2/select2.min.css">
<!--===============================================================================================-->
<link rel="stylesheet" type="text/css" href="vendor/perfect-scrollbar/perfect-scrollbar.css">
<!--===============================================================================================-->
<link rel="stylesheet" type="text/css" href="css/util.css">
<link rel="stylesheet" type="text/css" href="css/main.css">
<style type="text/css">
.navbar-text > a {
color: inherit;
text-decoration: none;
}
.white_bg {
background-color: #eef7fa;
padding: 3px;
}
.line2 {
margin: 5px 0;
height: 2px;
background: repeating-linear-gradient(to right, black 0, black 10px, transparent 10px, transparent 12px);
/* 10px black then 2px transparent -> repeat this! */
}
.bordered, .hover2, xximg:hover {
border-color: #AAAAAA;
border-style: solid;
border-width: 1px;
border-collapse: separate /* otherwise does not work in IE inside tables */;
}
.hover2 {
-webkit-box-shadow: 2px 2px 2px rgba(0, 0, 120, 0.6);
-moz-box-shadow: 2px 2px 2px rgba(0, 0, 120, 0.6);
-o-box-shadow: 2px 2px 2px rgba(0, 0, 120, 0.6);
box-shadow: 0px 0px 10px rgba(0, 0, 120, 0.6); /* red/green channels were missing in the original rgba(); 0, 0 assumed for a blue-tinted shadow */
}
figure figcaption {
text-align: center;
margin: 10px;
}
figure {
display: inline-block;
margin: 0px;
}
figure img {
vertical-align: top;
border: 1px solid #ddd;
border-radius: 0px;
padding: 0px;
}
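/* Dim speaker photos and add a drop shadow on hover */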
figure img:hover {
opacity: 0.7;
filter: alpha(opacity=70);
-webkit-box-shadow: -2px 4px 10px 0px rgba(0, 0, 0, 1);
-moz-box-shadow: -2px 4px 10px 0px rgba(0, 0, 0, 1);
box-shadow: -2px 4px 10px 0px rgba(0, 0, 0, 1);
-webkit-transition: all .2s ease-in-out;
transition: all .2s ease-in-out;
}
</style>
</head>
<body id="page-top">
<!-- Navigation -->
<nav class="navbar navbar-expand-lg navbar-dark bg-dark fixed-top" id="mainNav">
<div class="container">
<div class="navbar-header">
<a class="navbar-brand js-scroll-trigger" href="#">
SSNLP 2024
</a>
</div>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarResponsive"
aria-controls="navbarResponsive" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarResponsive">
<ul class="navbar-nav ml-auto">
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="#overview">Overview</a>
</li>
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="#programme">Programme</a>
</li>
<!-- Speakers -->
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="#organizers">Organizers</a>
</li>
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="#partners">Partners</a>
</li>
<!-- Partners, Photos -->
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="#location">Location</a>
</li>
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="#past-ssnlp">Past SSNLP</a>
</li>
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="#contact">Contact</a>
</li>
</ul>
</div>
</div>
</nav>
<header class="bg-primary text-white">
<div class="container text-center">
<h1 style="font-size: 60px;font-weight: bold;color: #e48a52;">SSNLP 2024</h1>
<h2 style="font-size: 35px;color: #ffffff;background-color: rgba(0, 123, 255, .25);">The 2024 Singapore Symposium on Natural Language Processing</h2>
</div>
</header>
<section id="overview" class="bg-light">
<div class="container">
<div class="row">
<div class="col-lg-8 mx-auto">
<h2>Welcome!</h2>
<p style="text-align: justify;">We are excited to announce that the <b>Singapore Symposium on Natural Language Processing (SSNLP 2024)</b> will take place on <b>Wednesday, November 6</b>, as a full-day event.
SSNLP, an annual pre-conference workshop, gathers the Natural Language Processing community in Singapore,
bringing together local students, practitioners, and faculty. It offers a valuable platform to connect, exchange ideas, and foster collaboration.
<!-- [ <a href="https://www.comp.nus.edu.sg/maps/photos#com1">Map</A> ]
[ <a href="https://www.comp.nus.edu.sg/images/resources/content/mapsvenues/COM1_L2.jpg">Floorplan</A> ]-->
</p>
<p style="text-align: justify;">
Since its inception in 2018, SSNLP has steadily grown in both popularity and influence, with successful editions held in 2018, 2019, 2020, 2022, and 2023.
We look forward to continuing this tradition in 2024 and delivering another impactful experience for all participants.
</p>
<p style="text-align: justify;">
This year's event will be held at the <b>Mapletree Business City, Town Hall Auditorium (10 Pasir Panjang Road, Singapore 117438,
[<a href="https://maps.app.goo.gl/iPgMZCrtKoc2nYzi7">Google Map</a>], [<a href="images/roadmap.png">Venue Roadmap</a>],
[<a href="images/town-hall-1.jpg">Outdoor photo</a>], [<a href="images/town-hall-2.jpg">Indoor photo</a>])</b>.
Please note that seating is limited. In the event of oversubscription, we may consider virtual attendance for registered participants.
We encourage early bird registration to secure your spot.
<font color="red">The on-site registration deadline is October 31st, 23:59</font>; make sure you complete the registration before this!
</p>
<center>
<p>
<a href="https://forms.office.com/r/4RVt9GXPFD" target="_blank" class="btn disabled btn-secondary">On-site Registration Closed</a>
</p>
</center>
<p style="text-align: justify;">
Our in-person registration is now closed due to high demand. For those interested in virtual attendance, please complete the form below. Please note that priority for all Q&A sessions will be given to on-site attendees.
<font color="red">The virtual registration deadline is Nov 5th, 17:00 SGT</font>.
</p>
<center>
<p>
<a href="https://docs.google.com/forms/d/e/1FAIpQLScIBI41W_D57fg6If_WjncnK0P3-JTWcBZZ0KiOInK0iNTn0g/viewform?usp=sf_link" target="_blank" class="btn disabled btn-secondary">Virtual Registration Closed</a>
</p>
</center>
<!-- <p>Due to fire code restrictions, our venues cannot accommodate additional onsite registrations. However, we have plenty of capacity for virtual attendance. Feel free to request the link for the virtual registration below.
</p> -->
<!-- <p>Right now, our in-person registration is closed, but feel free to make virtual registrations for online attendance by dropping an email. We will send you a Zoom link.</p>
<center>
<p>
<a href="https://docs.google.com/forms/d/e/1FAIpQLScBFhd2XQf8ciLpRcNPAlim3mFXLc4CpDxTn-8Cef3AKd5j9Q/viewform?pli=1" target="_blank" class="btn disabled btn-secondary">Virtual Registration closed</a> -->
<!-- <a href="mailto:[email protected]?subject=SSNLP 2023 Virtual Registration&body=Hi, I'd like to receive the Zoom link for the upcoming SSNLP event on 5 December 2023. Can you send it to me? Thank you!" target="_blank" class="btn btn-primary">Register to get the Virtual Attendance Zoom links</a> -->
<!-- </p>
</center> -->
<h3>Latest news</h3>
<br>
<!-- <p><span class="white_bg"><strong>Dec 19, 2023</strong> — All the Posters and relevant materials can be downloaded from <a href="https://drive.google.com/drive/folders/1Njknx9-8A1Vk2d40vP7Pmce6_6tTDAQ3?usp=sharing-->
<!--">here.</a></span></p>-->
<!-- <p><span class="white_bg"><strong>Nov 30, 2023</strong> — Program is confirmed, see you all</span></p>-->
<p><span class="white_bg"><strong>Nov 1, 2024</strong> — Virtual registration deadline is <font color="red">Nov 5th, 17:00 SGT</font>, please register before this!</span></p>
<p><span class="white_bg"><strong>Oct 30, 2024</strong> — On-site registration deadline is <font color="red">October 31st, 23:59</font>, please register before this!</span></p>
<p><span class="white_bg"><strong>Oct 20, 2024</strong> — Registration is open, please register now</span></p>
<p><span class="white_bg"><strong>Oct 15, 2024</strong> — The date is confirmed: November 6, 2024</span></p>
</div>
</div>
</div>
</section>
<section id="programme" class="bg-light">
<div class="container">
<div class="row">
<div class="col-lg-8 mx-auto">
<h2>Programme</h2>
<p style="text-align: justify;">We've planned to host Oral and Poster sessions for research presentations, and also have invited Keynote Presentations as well as Industry Talks.</p>
<div class="container-table100">
<div class="wrap-table100">
<div class="table100 ver5 m-b-10">
<table data-vertable="ver5">
<thead>
<tr class="row100 head">
<th class="column100 column1" data-column="column1"><strong>Time</strong></th>
<th class="column100 column2" data-column="column2"><strong>Event</strong></th>
</tr>
</thead>
<tbody>
<tr class="row100">
<td class="column100 column1" data-column="column1">09:00 - 09:15</td>
<td class="column100 column2" data-column="column2"><strong>Welcome and Opening
Remarks</strong>
<!-- <br> <em>introducer:</em> [TBD] -->
</td>
</tr>
<tr class="row100">
<td class="column100 column1" data-column="column1">09:15 - 10:15</td>
<td class="column100 column2" data-column="column2">
<a style="color: #D90D0DFF;">
<strong>Remote Keynote 1 (45mins + 15mins Q&A)</strong>
</a>
<br> <em>speaker:</em> <font color="red">Jason Wei</font>
<br> <em>session chair:</em> Roy Lee
</td>
</tr>
<!-- <tr class="row100">
<td class="column100 column1" data-column="column1">09:30 - 10:30</td>
<td class="column100 column2" data-column="column2"><strong>Tea Break</strong></td>
</tr> -->
<tr class="row100">
<td class="column100 column1" data-column="column1">10:15 - 11:45</td>
<td class="column100 column2" data-column="column2">
<a style="color: #d90d0d;">
<strong>Remote Keynote 2 - 4 (each 20mins + 10mins Q&A) </strong>
</a>
<br> <em>speakers:</em> <font color="red">Bing Liu</font> & <font color="red">Yue Zhang</font> & <font color="red">Jing Ma</font>
<br> <em>session chairs:</em> Yang Deng & Anh Tuan Luu
<!-- <em>chaired by:</em> Francis Bond -->
</td>
</tr>
<tr class="row100">
<td class="column100 column1" data-column="column1">11:45 - 12:45</td>
<td class="column100 column2" data-column="column2">
<a style="color: #000000;">
<a style="color: #ffc107;">
<strong>Research Oral Presentation 1 (each 12mins + 3mins Q&A)</strong>
</a>
<br> <em>session chair:</em> Haonan Wang
<!-- <br>
<em>speaker:</em> <font color="red">Heng Ji</font> :: <em>chaired
by:</em> Nancy Chen -->
</td>
</tr>
<tr class="row100">
<td class="column100 column1" data-column="column1">12:45 - 14:00</td>
<td class="column100 column2" data-column="column2">
<a style="color: #007bff;"><strong>Poster Session 1</strong> </a> <br>
<a style="color: #00d754;"><strong>Lunch (w/ Career & Networking Session)</strong> </a>
<!-- <br> <em>speaker:</em> <font color="red">Eduard Hovy</font> ::
<em>chaired by:</em> Li Haizhou -->
</td>
</tr>
<tr class="row100">
<td class="column100 column1" data-column="column1">14:00 - 15:30</td>
<td class="column100 column2" data-column="column2">
<a style="color: #bd00ef;">
<strong>On-Site Industry Talks (each 20mins + 10mins Q&A)</strong>
</a>
<br> <em>speakers:</em> <font color="red">Tianyu Pang</font> &
<font color="red">Wenxuan Zhang</font> &
<font color="red">Taifeng Wang</font>
<br> <em>session chair:</em> Jiaying Wu
</td>
</tr>
<tr class="row100">
<td class="column100 column1" data-column="column1">15:30 - 16:45</td>
<td class="column100 column2" data-column="column2">
<a style="color: #007bff;">
<strong>Poster Session 2</strong>
</a>
<!-- <br> <em>session chair:</em> Yanxia Qin -->
</td>
</tr>
<tr class="row100">
<td class="column100 column1" data-column="column1">15:45 - 16:30</td>
<td class="column100 column2" data-column="column2">
<a style="color: #ffc107;">
<strong>Research Oral Presentation 2 (each 12mins + 3mins Q&A)</strong>
</a>
<br> <em>session chair:</em> Moxin Li
<!-- <br> <em>speaker:</em> <font color="red">Rada
Mihalcea</font> :: <em>chaired by:</em> Francis Bond -->
</td>
</tr>
<tr class="row100">
<td class="column100 column1" data-column="column1">16:30 - 17:00</td>
<td class="column100 column2" data-column="column2">
<a style="color: #2bc94f;">
<strong>Coffee break</strong>
</a>
<!-- <br> <em>session chair:</em> Yanxia Qin -->
</td>
</tr>
<tr class="row100">
<td class="column100 column1" data-column="column1">17:00 - 17:45</td>
<td class="column100 column2" data-column="column2">
<a style="color: #ffc107;">
<strong>Research Oral Presentation 3 (each 12mins + 3mins Q&A)</strong>
</a>
<br> <em>session chair:</em> Zhiyuan Hu
<!-- <br> <em>speaker:</em> <font color="red">Rada
Mihalcea</font> :: <em>chaired by:</em> Francis Bond -->
</td>
</tr>
<!-- -->
<!-- <tr class="row100">-->
<!-- <td class="column100 column1" data-column="column1">14:00 - 15:00</td>-->
<!-- <td class="column100 column2" data-column="column2">-->
<!-- <a style="color: #ffc107;">-->
<!-- <strong>Paper session 3</strong>-->
<!-- </a> -->
<!-- <br> <em>session chair:</em> Taha Aksu -->
<!-- </td>-->
<!-- </tr>-->
<!-- <tr class="row100">-->
<!-- <td class="column100 column1" data-column="column1">15:00 - 16:00</td>-->
<!-- <td class="column100 column2" data-column="column2">-->
<!-- <a style="color: #000000;">-->
<!-- <strong>Keynote 5 and Keynote 6 (each 20mins + 10mins Q&A) </strong>-->
<!-- </a>-->
<!-- <br> <em>speakers:</em> <font color="red">Diyi</font> & <font color="red">Joao</font>-->
<!-- <br> <em>session chair:</em> Kokil Jaidka -->
<!--<!– :: <em>chaired by:</em> Li Haizhou –>-->
<!-- </td>-->
<!-- </tr>-->
<tr class="row100">
<td class="column100 column1" data-column="column1">17:45 - 18:00</td>
<td class="column100 column2" data-column="column2">
<a style="color: #000000;">
<strong>Closing</strong>
</a>
</td>
</tr>
<!-- <tr class="row100">-->
<!-- <td class="column100 column1" data-column="column1">16:30 - 18:30</td>-->
<!-- <td class="column100 column2" data-column="column2">-->
<!-- <a style="color: #000000;">-->
<!-- <strong>Industry session: Speaker presentations (each 20mins + 10mins Q&A)</strong>-->
<!-- </a>-->
<!-- <br> <em>speakers:</em> <font color="red">Daniel</font> & <font color="red">Huda</font> & <font color="red">Alessandro</font> & <font color="red">Lidong</font>-->
<!-- <br> <em>session chairs:</em> Suzanna Sia & Gao Wei -->
<!-- </td>-->
<!-- </tr>-->
<!-- <tr class="row100">
<td class="column100 column1" data-column="column1">18:00 - 18:30</td>
<td class="column100 column2" data-column="column2"><strong>Townhall</strong>
</td>
</tr> -->
<!-- <tr class="row100">
<td class="column100 column1" data-column="column1">17:30 - 18:00</td>
<td class="column100 column2" data-column="column2">
</td>
</tr> -->
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
</div>
</section>
<section id="speakers" class="bg-light">
<div class="container">
<div class="row">
<div class="col-lg-8 mx-auto">
<h3>Oral & Poster Presentations</h3>
<p style="text-align: justify;">
<!-- Papers are mainly exported from the EMNLP 2023 and ACL 2023.-->
Each oral presentation is 12 minutes, followed by 3 minutes for immediate questions.
<!-- Posters are short, workshop, work-in-progress papers or last-minute additions to our programme. -->
Poster boards can accommodate posters up to 1m x 1m, in either portrait or landscape orientation.
The posters are divided into Poster Session 1 and Poster Session 2 as listed below, with each session containing around 20 posters.
<!-- There will be ample time after a session to engage in the breaks directly after the session. Session chairs should record each session and check with the speakers if they want their post-recorded session made public. Questions will be solicited via crowdsourcing via Padlets. -->
</p>
<div id="Oral" data-toggle="collapse" data-parent="#accordion1">
<a href="#Oral-list" data-toggle="collapse"><b>Click to see the paper list ↓</b></a>
<div class="accordion1" id="accordion1">
<div id="Oral-list" class="collapse" data-parent="#accordion1">
<div class="table ver5 m-b-10">
<table data-vertable="ver5">
<tbody>
<tr class="row100">
<td valign="middle" style="font-weight: bold;">Paper session 1 </td>
<td class="column100" data-column="column1">
<p style="text-align:left; font-size:small">
[1] Zhiyuan Hu, Chumin Liu, Xidong Feng, Yilun Zhao, See-Kiong Ng, Anh Tuan Luu, Junxian He, Pang Wei Koh, Bryan Hooi. <i>Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models. NeurIPS</i> (<b>Slot: <a style="color: red;">11:45 - 12:00</a></b>)<BR/>
[2] Lin Xu, Zhiyuan Hu, Daquan Zhou, Hongyu Ren, Zhen Dong, Kurt Keutzer, See-Kiong Ng, Jiashi Feng. <i>MAgIC: Investigation of Large Language Model Powered Multi-Agent in Cognition, Adaptability, Rationality and Collaboration. EMNLP main</i> (<b>Slot: <a style="color: red;">12:00 - 12:15</a></b>)<BR/>
[3] Xuan Zhang, Chao Du, Tianyu Pang, Qian Liu, Wei Gao, Min Lin. <i>Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. NeurIPS</i> (<b>Slot: <a style="color: red;">12:15 - 12:30</a></b>)<BR/>
[4] Xiaobao Wu, Liangming Pan, William Yang Wang, Anh Tuan Luu. <i>AKEW: Assessing Knowledge Editing in the Wild. EMNLP main</i> (<b>Slot: <a style="color: red;">12:30 - 12:45</a></b>)<BR/>
<!-- [1] Ye, Hai, & Xie, Qizhe, & Ng, Hwee Tou. <i>Multi-Source Test-Time Adaptation as Dueling Bandits for Extractive Question Answering</i> (<b>Slot: <a style="color: red;">08:15 - 08:30</a></b>)<BR/> -->
<!-- [2] Zhengyuan Liu, Yong Keong Yap, Hai Leong Chieu and Nancy F. Chen. <i>Guiding Computational Stance Detection with Expanded Stance Triangle Framework</i> (<b>Slot: <a style="color: red;">08:30 - 08:45</a></b>)<BR/>-->
<!-- [3] Ahmed Masry*, Parsa Kavehzadeh*, Xuan Long Do, Enamul Hoque, Shafiq Joty. <i>UniChart: A Universal Vision-language Pretrained Model for Chart Comprehension and Reasoning</i> (<b>Slot: <a style="color: red;">08:45 - 09:00</a></b>)<BR/>-->
<!-- [4] Ibrahim Taha Aksu, Devamanyu Hazarika, Shikib Mehri, Seokhwan Kim, Dilek Hakkani-Tur, Yang Liu, Mahdi Namazifar. <i>CESAR: Automatic Induction of Compositional Instructions for Multi-turn Dialogs</i> (<b>Slot: <a style="color: red;">09:00 - 09:15</a></b>)<BR/>-->
</p>
</td>
</tr>
<tr class="row100">
<td valign="middle" style="font-weight: bold;">Paper session 2 </td>
<td class="column100" data-column="column1">
<p style="text-align:left; font-size:small">
[1] Ming Shan Hee, Aditi Kumaresan, Roy Ka-Wei Lee. <i>Bridging Modalities: Enhancing Cross-Modality Hate Speech Detection with Few-Shot In-Context Learning. EMNLP main</i> (<b>Slot: <a style="color: red;">15:45 - 16:00</a></b>)<BR/>
[2] Yang Deng, Yong Zhao, Moxin Li, See-Kiong Ng, Tat-Seng Chua. <i>Don't Just Say "I Don't Know"! Self-aligning Large Language Models for Responding to Unknown Questions with Explanations. EMNLP main</i> (<b>Slot: <a style="color: red;">16:00 - 16:15</a></b>)<BR/>
[3] Jiahao Ying, Yixin Cao, Yushi Bai, Qianru Sun, Bo Wang, Wei Tang, Zhaojun Ding, Yizhe Yang, Xuanjing Huang, Shuicheng Yan. <i>Automating Dataset Updates Towards Reliable and Timely Evaluation of Large Language Models. NeurIPS</i> (<b>Slot: <a style="color: red;">16:15 - 16:30</a></b>)<BR/>
<!-- [1] Hannan Cao, Liping Yuan, Yuchen Zhang, Hwee Tou Ng. <i>Unsupervised Grammatical Error Correction Rivaling Supervised Methods</i> (<b>Slot: <a style="color: red;">10:30 - 10:45</a></b>)<BR/> -->
<!-- [2] Moxin Li, Wenjie Wang, Fuli Feng, Yixin Cao, Jizhi Zhang, Tat-Seng Chua. <i>Robust Prompt Optimization for Large Language Models Against Distribution Shifts</i> (<b>Slot: <a style="color: red;">10:45 - 11:00</a></b>)<BR/>-->
<!-- [3] Ratish Puduppully, Anoop Kunchukuttan, Raj Dabre, Ai Ti Aw, Nancy F. Chen. <i>Decomposed Prompting for Machine Translation between Related Languages using Large Language Models</i> (<b>Slot: <a style="color: red;">11:00 - 11:15</a></b>)<BR/>-->
<!-- [4] Zhiyuan Liu, Sihang Li, Yanchen Luo, Hao Fei, Yixin Cao, Kenji Kawaguchi, Xiang Wang, Tat-Seng Chua. <i>MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter.</i> (<b>Slot: <a style="color: red;">11:15 - 11:30</a></b>)<BR/>-->
</p>
</td>
</tr>
<tr class="row100">
<td valign="middle" style="font-weight: bold;">Paper session 3 </td>
<td class="column100" data-column="column1">
<p style="text-align:left; font-size:small">
[1] Yew Ken Chia, Guizhen Chen, Weiwen Xu, Luu Anh Tuan, Soujanya Poria, Lidong Bing. <i>Reasoning Paths Optimization: Learning to Reason and Explore From Diverse Paths. EMNLP main</i> (<b>Slot: <a style="color: red;">17:00-17:15</a></b>)<BR/>
[2] Yiran Zhao, Wenyue Zheng, Tianle Cai, Xuan Long Do, Kenji Kawaguchi, Anirudh Goyal, Michael Shieh. <i>Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling. NeurIPS</i> (<b>Slot: <a style="color: red;">17:15-17:30</a></b>)<BR/>
[3] Wenyang Hu, Yao Shu, Zongmin Yu, Zhaoxuan Wu, Xiaoqiang Lin, Zhongxiang Dai, See-Kiong Ng, Bryan Kian Hsiang Low. <i>Localized Zeroth-Order Prompt Optimization. NeurIPS spotlight</i> (<b>Slot: <a style="color: red;">17:30 - 17:45</a></b>)<BR/>
<!-- [1] Jinggui Liang, Lizi Liao. <i>ClusterPrompt: Cluster Semantic Enhanced Prompt Learning for New Intent Discovery</i> (<b>Slot: <a style="color: red;">14:00 - 14:15</a></b>)<BR/> -->
<!-- [2] Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, Roy Ka-Wei Lee. <i>LLM-Adapter: An Empirical Study of Adapter-based Parameter-Efficient Fine-Tuning for Large Language Models</i> (<b>Slot: <a style="color: red;">14:15 - 14:30</a></b>)<BR/>-->
<!-- [3] Minzhi Li, Taiwei Shi, Caleb Ziems, Min-Yen Kan, Nancy F Chen, Zhengyuan Liu, Diyi Yang. <i>CoAnnotating: Uncertainty-Guided Work Allocation between Human and Large Language Models for Data Annotation</i> (<b>Slot: <a style="color: red;">14:30 - 14:45</a></b>)<BR/>-->
<!-- [4] Quanyu Long, Wenya Wang, Sinno Jialin Pan. <i>Adapt in Contexts: Retrieval-Augmented Domain Adaptation via In-Context Learning</i> (<b>Slot: <a style="color: red;">14:45 - 15:00</a></b>)<BR/>-->
</p>
</td>
</tr>
<tr class="row100">
<td valign="middle" style="font-weight: bold;">Poster session 1 </td>
<td class="column100" data-column="column1">
<p style="text-align:left; font-size:small">
[1] Hai Ye, Hwee Tou Ng. <i>Preference-Guided Reflective Sampling for Aligning Language Models.</i> (<b>Board: <a style='color: red;'>1</a></b>)<BR/>
[2] Do Xuan Long, Duong Ngoc Yen, Anh Tuan Luu, Kenji Kawaguchi, Min-Yen Kan, Nancy F. Chen. <i>Multi-expert Prompting Improves Reliability, Safety and Usefulness of Large Language Models</i> (<b>Board: <a style='color: red;'>2</a></b>)<BR/>
[3] Naaman Tan, Josef Valvoda, Tianyu Liu, Anej Svete, Yanxia Qin, Min-Yen Kan, Ryan Cotterell. <i>A Fundamental Trade-off in Aligned Language Models and its Relation to Sampling Adaptors</i> (<b>Board: <a style='color: red;'>3</a></b>)<BR/>
[4] Esther Gan*, Yiran Zhao*, Liying Cheng, Mao Yancan, Anirudh Goyal, Kenji Kawaguchi, Min-Yen Kan, Michael Shieh. <i>Reasoning Robustness of LLMs to Adversarial Typographical Errors</i> (<b>Board: <a style='color: red;'>4</a></b>)<BR/>
[5] Hongfu Liu, Yuxi Xie, Ye Wang, Michael Shieh. <i>Advancing Adversarial Suffix Transfer Learning on Aligned Large Language Models</i> (<b>Board: <a style='color: red;'>5</a></b>)<BR/>
[6] Thong Nguyen, Zhiyuan Hu, Xiaobao Wu, Cong-Duy T Nguyen, See-Kiong Ng, Anh Tuan Luu. <i>Encoding and Controlling Global Semantics for Long-form Video Question Answering</i> (<b>Board: <a style='color: red;'>6</a></b>)<BR/>
[7] Gregory Kang Ruey Lau, Xinyuan Niu, Hieu Dao, Jiangwei Chen, Chuan Sheng Foo, Bryan Kian Hsiang Low. <i>Waterfall: Framework for Robust and Scalable Text Watermarking</i> (<b>Board: <a style='color: red;'>7</a></b>)<BR/>
[8] Yunze Xiao, Yujia Hu, Kenny Tsu Wei Choo, Roy Ka-Wei Lee. <i>ToxiCloakCN: Evaluating Robustness of Offensive Language Detection in Chinese with Cloaking Perturbations</i> (<b>Board: <a style='color: red;'>8</a></b>)<BR/>
[9] Miaoyu Li, Haoxin Li, Zilin Du, Boyang Li. <i>Diversify, Rationalize, and Combine: Ensembling Multiple QA Strategies for Zero-shot Knowledge-based VQA</i> (<b>Board: <a style='color: red;'>9</a></b>)<BR/>
[10] Xuan Zhang, Yang Deng, Zifeng Ren, See-Kiong Ng, Tat-Seng Chua. <i>Ask-before-Plan: Proactive Language Agents for Real-World Planning</i> (<b>Board: <a style='color: red;'>10</a></b>)<BR/>
[11] Muhammad Reza Qorib, Alham Fikri Aji, Hwee Tou Ng. <i>Efficient and Interpretable Grammatical Error Correction with Mixture of Experts</i> (<b>Board: <a style='color: red;'>11</a></b>)<BR/>
[12] Fengzhu Zeng, Wenqian Li, Wei Gao, Yan Pang. <i>Multimodal Misinformation Detection by Learning from Synthetic Data with Multimodal LLMs</i> (<b>Board: <a style='color: red;'>12</a></b>)<BR/>
[13] Jiahao Ying, Mingbao Lin, Yixin Cao, Wei Tang, Bo Wang, Qianru Sun, Xuanjing Huang, Shuicheng Yan. <i>LLMs-as-Instructors: Learning from Errors Toward Automating Model Improvement</i> (<b>Board: <a style='color: red;'>13</a></b>)<BR/>
[14] Xinyi Xu, Zhaoxuan Wu, Rui Qiao, Arun Verma, Yao Shu, Jingtan Wang, Xinyuan Niu, Zhenfeng He, Jiangwei Chen, Zijian Zhou, Gregory Kang Ruey Lau, Hieu Dao, Lucas Agussurja, Rachael Hwee Ling Sim, Xiaoqiang Lin, Wenyang Hu, Zhongxiang Dai, Pang Wei Koh, Bryan Kian Hsiang Low. <i>Position Paper: Data-Centric AI in the Age of Large Language Models</i> (<b>Board: <a style='color: red;'>14</a></b>)<BR/>
[15] Ming Shan Hee, Shivam Sharma, RUI CAO, Palash Nandi, Preslav Nakov, Tanmoy Chakraborty, Roy Ka-Wei Lee. <i>Recent Advances in Online Hate Speech Moderation: Multimodality and the Role of Large Models</i> (<b>Board: <a style='color: red;'>15</a></b>)<BR/>
[16] Xu Guo, Zilin Du, Boyang Li, Chunyan Miao. <i>Generating Synthetic Datasets for Few-shot Prompt Tuning</i> (<b>Board: <a style='color: red;'>16</a></b>)<BR/>
[17] Quanyu Long, Yin Wu, Wenya Wang, Sinno Jialin Pan. <i>Does In-Context Learning Really Learn? Rethinking How Large Language Models Respond and Solve Tasks via In-Context Learning</i> (<b>Board: <a style='color: red;'>17</a></b>)<BR/>
[18] Yujia Hu, Zhiqiang Hu, Chun Wei Seah, Roy Ka-Wei Lee. <i>InstructAV: Instruction Fine-tuning Large Language Models for Authorship Verification</i> (<b>Board: <a style='color: red;'>18</a></b>)<BR/>
[19] Zhengyuan Liu, Stella Xin Yin, Geyu Lin, Nancy F. Chen. <i>Personality-aware Student Simulation for Conversational Intelligent Tutoring Systems</i> (<b>Board: <a style='color: red;'>19</a></b>)<BR/>
[20] Moxin Li, Wenjie Wang, Fuli Feng, Fengbin Zhu, Qifan Wang, Tat-Seng Chua. <i>Think Twice Before Trusting: Self-Detection for Large Language Models through Comprehensive Answer Reflection</i> (<b>Board: <a style='color: red;'>20</a></b>)<BR/>
<!-- [2] Fengzhu Zeng, Wei Gao. <i>Prompt to be Consistent is Better than Self-Consistent? Few-Shot and Zero-Shot Fact Verification with Pre-trained Language Models</i> (<b>Board: <a style="color: red;">P102</a></b>)<BR/> -->
<!-- [3] Shuo Sun, Yuchen Zhang, Jiahuan Yan, Yuze GAO, Donovan Ong, Bin Chen, Jian Su. <i>Battle of the Large Language Models: Dolly vs LLaMA vs Vicuna vs Guanaco vs Bard vs ChatGPT - A Text-to-SQL Parsing Comparison</i> (<b>Board: <a style="color: red;">P105</a></b>)<BR/>-->
<!-- [4] Liangming Pan, Alon Albalak, Xinyi Wang, William Yang Wang. <i>Logic-LM: Empowering large language models with symbolic solvers for faithful logical reasoning</i> (<b>Board: <a style="color: red;">P106</a></b>)<BR/>-->
<!-- [5] Jiaxi Li and Wei Lu. <i>Contextual Distortion Reveals Constituency: Masked Language Models are Implicit Parsers</i> (<b>Board: <a style="color: red;">P109</a></b>)<BR/>-->
<!-- [6] Shaz Furniturewala, Abhinav Java, Surgan Jandial, Simra Shahid, Pragyan Banerjee, Balaji Krishnamurthy, Sumit Bhatia and Kokil Jaidka. <i>Evaluating the Efficacy of Prompting Techniques for Debiasing Language Model Outputs</i> (<b>Board: <a style="color: red;">P110</a></b>)<BR/>-->
<!-- [7] Tan, Qingyu, & Xu, Lu, & Bing, Lidong, & Ng, Hwee Tou. <i>Class-Adaptive Self-Training for Relation Extraction with Incompletely Annotated Training Data</i> (<b>Board: <a style="color: red;">P103</a></b>)<BR/>-->
<!-- [8] Yixi Ding, Yanxia Qin, Qian Liu, Min Yen Kan. <i>CocoSciSum: A Scientific Summarization Toolkit with Compositional Controllability</i> (<b>Board: <a style="color: red;">P104</a></b>)<BR/>-->
<!-- [9] Shaurya Rohatgi, Yanxia Qin, Benjamin Aw, Niranjana Unnithan, Min-Yen Kan. <i>The ACL OCL Corpus: Advancing Open Science in Computational Linguistics</i> (<b>Board: <a style="color: red;">P107</a></b>)<BR/>-->
<!-- [10] Xinyuan Lu, Liangming Pan, Qian Liu, Preslav Nakov, Min-Yen Kan. <i>SCITAB: A Challenging Benchmark for Compositional Reasoning and Claim Verification on Scientific Tables</i> (<b>Board: <a style="color: red;">P108</a></b>)<BR/>-->
<!-- [11] Yeo, Gerard., Jaidka K. <i>The PEACE-Reviews dataset: Modeling Cognitive Appraisals in Emotion Text Analysis</i> (<b>Board: <a style="color: red;">P111</a></b>)<BR/>-->
<!-- [12] Kankan Zhou, Eason Lai, Wei Bin Au Yeong, Kyriakos Mouratidis, Jing Jiang. <i>ROME: Evaluating Pre-trained Vision-Language Models on Reasoning beyond Visual Common Sense</i> (<b>Board: <a style="color: red;">P112</a></b>)<BR/>-->
<!-- [13] Xiaobing Sun, Jiaxi Li, and Wei Lu. <i>Unraveling Feature Extraction Mechanisms in Neural Networks</i> (<b>Board: <a style="color: red;">P113</a></b>)<BR/>-->
</p>
</td>
</tr>
<tr class="row100">
<td valign="middle" style="font-weight: bold;">Poster session 2 </td>
<td class="column100" data-column="column1">
<p style="text-align:left; font-size:small">
[1] Suzanna Sia, David Mueller, Kevin Duh. <i>Where does In context learning happen in LLMs</i> (<b>Board: <a style='color: red;'>1</a></b>)<BR/>
[2] Hao Fei, Shengqiong Wu, Hanwang Zhang, Tat-Seng Chua, Shuicheng Yan. <i>VITRON: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing</i> (<b>Board: <a style='color: red;'>2</a></b>)<BR/>
[3] Zijian Zhou, Xiaoqiang Lin, Xinyi Xu, Alok Prakash, Daniela Rus, Bryan Kian Hsiang Low. <i>DETAIL: Task DEmonsTration Attribution for Interpretable In-context Learning</i> (<b>Board: <a style='color: red;'>3</a></b>)<BR/>
[4] Zhaoxuan Wu, Xiaoqiang Lin, Zhongxiang Dai, Wenyang Hu, Yao Shu, See-Kiong Ng, Patrick Jaillet, Bryan Kian Hsiang Low. <i>Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars</i> (<b>Board: <a style='color: red;'>4</a></b>)<BR/>
[5] Zhuanghua Liu, Luo Luo, Bryan Kian Hsiang Low. <i>Gradient-Free Methods for Nonconvex Nonsmooth Stochastic Compositional Optimization</i> (<b>Board: <a style='color: red;'>5</a></b>)<BR/>
[6] Mingzhe Du, Anh Tuan Luu, Bin Ji, Qian Liu, See-Kiong Ng. <i>Mercury: A Code Efficiency Benchmark for Code Large Language Models</i> (<b>Board: <a style='color: red;'>6</a></b>)<BR/>
[7] Bhardwaj, Rishabh, Do Duc Anh, Soujanya Poria. <i>Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic</i> (<b>Board: <a style='color: red;'>7</a></b>)<BR/>
[8] Ruichao Yang, Wei Gao, Jing Ma, Hongzhan Lin, Bo Wang. <i>Reinforcement Tuning for Detecting Stances and Debunking Rumors Jointly with Large Language Models</i> (<b>Board: <a style='color: red;'>8</a></b>)<BR/>
[9] Fengzhu Zeng, Wei Gao. <i>JustiLM: Few-Shot Justification Generation for Explainable Fact-Checking of Real-world Claims</i> (<b>Board: <a style='color: red;'>9</a></b>)<BR/>
[10] Xinze Li, Yixin Cao, Liangming Pan, Yubo Ma, Aixin Sun. <i>Towards Verifiable Generation: A Benchmark for Knowledge-aware Language Model Attribution</i> (<b>Board: <a style='color: red;'>10</a></b>)<BR/>
[11] Jiahao Ying, Yixin Cao, Kai Xiong, Yidong He, Long Cui, Yongbin Liu. <i>Intuitive or Dependent? Investigating LLMs' Behavior Style to Conflicting Prompts</i> (<b>Board: <a style='color: red;'>11</a></b>)<BR/>
[12] Do Xuan Long*, Yiran Zhao*, Hannah Brown*, Yuxi Xie, James Zhao, Nancy Chen, Kenji Kawaguchi, Michael Shieh, Junxian He. <i>Prompt Optimization via Adversarial In-Context Learning</i> (<b>Board: <a style='color: red;'>12</a></b>)<BR/>
[13] Jundong Xu, Hao Fei, Liangming Pan, Qian Liu, Mong-Li Lee, Wynne Hsu. <i>Faithful Logical Reasoning via Symbolic Chain-of-Thought</i> (<b>Board: <a style='color: red;'>13</a></b>)<BR/>
[14] Cunxiao Du et al. <i>GliDe with a CaPE: A Low-Hassle Method to Accelerate Speculative Decoding</i> (<b>Board: <a style='color: red;'>14</a></b>)<BR/>
[15] Xiangming Gu*, Xiaosen Zheng*, Tianyu Pang*, Chao Du, Qian Liu, Ye Wang, Jing Jiang, Min Lin. <i>Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast</i> (<b>Board: <a style='color: red;'>15</a></b>)<BR/>
[16] Shengqiong Wu, Hao Fei, Leigang Qu, Wei Ji, Tat-Seng Chua. <i>NExT-GPT: Any-to-Any Multimodal LLM</i> (<b>Board: <a style='color: red;'>16</a></b>)<BR/>
[17] Hao Fei, Shengqiong Wu, Wei Ji, Hanwang Zhang, Meishan Zhang, Mong-Li Lee, Wynne Hsu. <i>Video-of-thought: Step-by-step video reasoning from perception to cognition</i> (<b>Board: <a style='color: red;'>17</a></b>)<BR/>
[18] Suzanna Sia, Alexandra Delucia, Kevin Duh. <i>Anti-Lm Decoding for zeroshot in context MT</i> (<b>Board: <a style='color: red;'>18</a></b>)<BR/>
[19] Anthony Tiong, Junqi Zhao, et al. <i>What Are We Measuring When We Evaluate Large Vision-Language Models? An Analysis of Latent Factors and Biases</i> (<b>Board: <a style='color: red;'>19</a></b>)<BR/>
[20] Yidan Sun, Qin Chao, Boyang Li. <i>Event Causality Is Key to Computational Story Understanding</i> (<b>Board: <a style='color: red;'>20</a></b>)<BR/>
[21] Brian Formento, Wenjie Feng, Chuan Sheng Foo, Luu Anh Tuan, See-Kiong Ng. <i>SemRoDe: Macro Adversarial Training to Learn Representations That are Robust to Word-Level Attacks</i> (<b>Board: <a style='color: red;'>21</a></b>)<BR/>
[22] Fu Jinlan, Ng See-Kiong, Jiang Zhengbao, Liu Pengfei. <i>GPTScore: Evaluate as You Desire</i> (<b>Board: <a style='color: red;'>22</a></b>)<BR/>
[23] Meng Luo, Hao Fei, Bobo Li, Shengqiong Wu, Qian Liu, Soujanya Poria, Erik Cambria, Mong-Li Lee, Wynne Hsu. <i>PanoSent: A Panoptic Sextuple Extraction Benchmark for Multimodal Conversational Aspect-based Sentiment Analysis</i> (<b>Board: <a style='color: red;'>23</a></b>)<BR/>
[24] Yubo Ma, Yuhang Zang, Liangyu Chen, Meiqi Chen, Yizhu Jiao, Xinze Li, Xinyuan Lu, Ziyu Liu, Yan Ma, Xiaoyi Dong, Pan Zhang, Liangming Pan, Yu-Gang Jiang, Jiaqi Wang, Yixin Cao, Aixin Sun. <i>MMLongBench-Doc: Benchmarking Long-context Document Understanding with Visualizations</i> (<b>Board: <a style='color: red;'>24</a></b>)<BR/>
[25] Yajing Yang, Qian Liu, Min-Yen Kan. <i>DataTales: A Benchmark for Real-World Intelligent Data Narration</i> (<b>Board: <a style='color: red;'>25</a></b>)<BR/>
<!-- [2] Rui Cao, Jing Jiang. <i>Modularized Zero-shot VQA with Pre-trained Models</i> (<b>Board: <a style="color: red;">P202</a></b>)<BR/>-->
<!-- [3] Bin Wang, Zhengyuan Liu, Nancy F. Chen. <i>Instructive Dialogue Summarization with Query Aggregations</i> (<b>Board: <a style="color: red;">P205</a></b>)<BR/>-->
<!-- [4] Huy Quang Dao, Lizi Liao, Dung D. Le, Yuxiang Nie. <i>Reinforced Target-driven Conversational Promotion</i> (<b>Board: <a style="color: red;">P206</a></b>)<BR/>-->
<!-- [5] Ibrahim Taha Aksu, Min-Yen Kan and Nancy F. Chen. <i>Zero-shot Adaptive Prefixes for Dialogue State Tracking Domain Adaptation</i> (<b>Board: <a style="color: red;">P209</a></b>)<BR/>-->
<!-- [6] Guangsheng Bao, Zhiyang Teng, Hao Zhou, Jianhao Yan, Yue Zhang. <i>Non-Autoregressive Document-Level Machine Translation</i> (<b>Board: <a style="color: red;">P210</a></b>)<BR/>-->
<!-- [7] Liangming Pan, Xiaobao Wu, Xinyuan Lu, Anh Tuan Luu, William Yang Wang, Min-Yen Kan, Preslav Nakov. <i>Fact-Checking Complex Claims with Program-Guided Reasoning</i> (<b>Board: <a style="color: red;">P203</a></b>)<BR/>-->
<!-- [8] Muhammad Reza Qorib, Hwee Tou Ng. <i>System Combination via Quality Estimation for Grammatical Error Correction</i> (<b>Board: <a style="color: red;">P204</a></b>)<BR/>-->
<!-- [9] Tan, Qingyu, & Ng, Hwee Tou, & Bing, Lidong. <i>Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models</i> (<b>Board: <a style="color: red;">P207</a></b>)<BR/>-->
<!-- [10] Ruichao Yang, Wei Gao, Jing Ma, Zhiwei Yang. <i>WSDMS: Debunk Fake News via Weakly Supervised Detection of Misinforming Sentences with Contextualized Social Wisdom</i> (<b>Board: <a style="color: red;">P208</a></b>)<BR/>-->
<!-- [11] Ratish Puduppully, Parag Jain, Nancy F. Chen and Mark Steedman. <i>Multi-Document Summarization with Centroid-Based Pretraining</i> (<b>Board: <a style="color: red;">P211</a></b>)<BR/>-->
<!-- [12] Mathieu Ravaut, Shafiq Joty, Nancy F. Chen. <i>Unsupervised Summarization Re-ranking</i> (<b>Board: <a style="color: red;">P212</a></b>)<BR/>-->
<!-- [13] Zhiqiang Hu, Nancy F. Chen, Roy Ka-Wei Lee. <i>Adapter-TST: A Parameter Efficient Method for Multiple-Attribute Text Style Transfer</i> (<b>Board: <a style="color: red;">P213</a></b>)<BR/>-->
</p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
</div>
</section>
<section id="speakers" class="bg-light">
<div class="container">
<div class="row">
<div class="col-lg-8 mx-auto">
<h3>Keynote Speakers</h3>
<p>
The following speakers from academia and industry have been invited to give keynotes at SSNLP 2024.
Please click a profile image to view the detailed description of the talk.
</p>
<div class="accordion" id="accordion">
<div class="table-responsive">
<table class="table">
<tbody>
<tr align="center">
<td>
<a href="#Preslav" data-toggle="collapse">
<figure>
<img src="images/speaker/jason-wei.png" class="hover2" height="185">
<figcaption><h5>Jason Wei</h5></figcaption>
</figure>
</a>
</td>
<td>
<a href="#Farah" data-toggle="collapse">
<figure>
<img src="images/speaker/bing-liu.png" class="hover2" align="center"
height="185">
<figcaption>
<h5>Bing Liu</h5>
</figcaption>
</figure>
</a>
</td>
<td>
<a href="#Yue" data-toggle="collapse">
<figure>
<img src="images/speaker/yuezhang.png" class="hover2" align="center"
height="185">
<figcaption>
<h5>Yue Zhang</h5>
</figcaption>
</figure>
</a>
</td>
<td>
<a href="#jingma" data-toggle="collapse">
<figure>
<img src="images/speaker/jing-ma.jpg" class="hover2" align="center"
height="185">
<figcaption>
<h5>Jing Ma</h5>
</figcaption>
</figure>
</a>
</td>
<!-- <td>
<a href="#Vivian" data-toggle="collapse">
<figure>
<img src="images/speaker/Vivian-Chen.jpg" class="hover2" height="185">
<figcaption><h5>Vivian Chen</h5></figcaption>
</figure>
</a>
</td>
<td>
<a href="#Tanya" data-toggle="collapse">
<figure>
<img src="images/speaker/tanya-goyal.jpeg" class="hover2"
height="185">
<figcaption><h5>Tanya Goyal</h5></figcaption>
</figure>
</a>
</td> -->
</tr>
</tbody>
</table>
</div>
<div id="Preslav" class="collapse" data-parent="#accordion">
<strong>Title: </strong>Scaling paradigms for large language models
<br>
<strong>Speaker: </strong><a href="https://www.jasonwei.net/">Jason Wei @ OpenAI</a>
<br>
<br>
<p style="text-align: justify;">
<strong> Abstract: </strong>
In this talk I will tell you about the role of scaling in the past five years of artificial intelligence.
In the first scaling paradigm, which started around five years ago, our field scaled large language models by training with more compute on more data. Such scaling led to the success of ChatGPT and other AI chat engines, which were surprisingly capable and general purpose. With the release of OpenAI o1, we are at the beginning of a new paradigm where we do not just scale training time compute, but we also scale test-time compute. These new models are trained via reinforcement learning on chain-of-thought reasoning, and by thinking harder for more-challenging tasks can solve even competition-level math and programming problems.
</p>
<br>
<p style="text-align: justify;">
<strong>Bio:</strong>
Dr. Jason Wei is an AI researcher based in San Francisco. He currently works at OpenAI, where he contributed to OpenAI o1, a frontier model trained to do chain-of-thought reasoning via reinforcement learning. From 2020 to 2023, Jason was a research scientist at Google Brain, where his work popularized chain-of-thought prompting, instruction tuning, and emergent phenomena.
</p><br>
<br>
</div>
<div id="Farah" class="collapse" data-parent="#accordion">
<strong>Title: </strong>Lifelong Learning Dialogue Systems
<br>
<strong>Speaker: </strong><a href="https://www.cs.uic.edu/~liub/">Bing Liu @ UIC</a>
<br>
<br>
<p style="text-align: justify;"> <strong> Abstract: </strong>
Dialogue systems, commonly known as chatbots, have gained escalating popularity in recent times due to their widespread applications in carrying out chit-chat conversations with users and task-oriented dialogues to accomplish various user tasks. Existing chatbots are usually trained from pre-collected and manually labeled data. Many also use manually compiled knowledge bases (KBs). Their ability to understand natural language is still limited. Typically, they need to be constantly improved by engineers with more labeled data and more manually compiled knowledge. In this talk, I would like to introduce the new paradigm of lifelong learning dialogue systems to endow chatbots with the ability to learn continually by themselves through their own self-initiated interactions with their users and working environments. As the systems chat more and more with users, they become more and more knowledgeable and better and better at conversing.
</p>
<br>
<p style="text-align: justify;"> <strong>Bio:</strong>
Dr. Bing Liu is a Distinguished Professor and Peter L. and Deborah K. Wexler Professor of Computing at the University of Illinois Chicago. He received his Ph.D. in Artificial Intelligence (AI) from the University of Edinburgh. His current research interests include continual/lifelong learning, lifelong learning dialogue systems, sentiment analysis, machine learning and natural language processing. He has published extensively in prestigious conferences and journals and authored five books: one about lifelong machine learning, one about lifelong learning dialogue systems, two about sentiment analysis, and one about Web mining. Three of his papers have received Test-of-Time awards, and another received a Test-of-Time honorable mention. Some of his works have also been widely reported in the popular and technology press internationally. He served as the Chair of ACM SIGKDD from 2013 to 2017 and as program chair of many leading data mining conferences. He is also the winner of the 2018 ACM SIGKDD Innovation Award, and is a Fellow of ACM, AAAI, and IEEE.
</p><br>
<br>
</div>
<div id="Yue" class="collapse" data-parent="#accordion">
<strong>Title: </strong>AutoSurvey: Large Language Models Can Automatically Write Surveys
<br>
<strong>Speaker: </strong><a href="https://frcchang.github.io/ ">Yue Zhang @ Westlake Unv</a>
<br>
<br>
<p style="text-align: justify;"> <strong> Abstract: </strong>
This talk introduces AutoSurvey, a speedy and well-organized methodology for automating the creation of comprehensive literature surveys in rapidly evolving fields like artificial intelligence. Traditional survey paper creation faces challenges due to the vast volume and complexity of information, prompting the need for efficient survey methods. While large language models (LLMs) offer promise in automating this process, challenges such as context window limitations, parametric knowledge constraints, and the lack of evaluation benchmarks remain. AutoSurvey addresses these challenges through a systematic approach that involves initial retrieval and outline generation, subsection drafting by specialized LLMs, integration and refinement, and rigorous evaluation and iteration. Our contributions include a comprehensive solution to the survey problem, a reliable evaluation method, and experimental validation demonstrating AutoSurvey's effectiveness.
</p>
<br>
<p style="text-align: justify;"> <strong>Bio:</strong>
Dr. Yue Zhang is a tenured Professor at Westlake University. His research interests include NLP and its underlying machine learning algorithms. His major contributions to the field include psycholinguistically motivated machine learning algorithms, learning-guided beam search for structured prediction, pioneering neural NLP models including graph LSTM, and OOD generalization for NLP. He authored the Cambridge University Press book "Natural Language Processing: A Machine Learning Perspective". He is the PC co-chair for CCL 2020 and EMNLP 2022, and action editor for Transactions of the ACL. He also served as associate editor for IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), IEEE Transactions on Big Data (TBD) and Computer Speech and Language (CSL). He won the best paper awards of IALP 2017 and COLING 2018, a best paper honorable mention at SemEval 2020, and best paper nominations for ACL 2018 and ACL 2023.
</p><br>
<br>
</div>
<div id="jingma" class="collapse" data-parent="#accordion">
<strong>Title: </strong>Large Language Model for Social Safety
<br>
<strong>Speaker: </strong><a href="https://majingcuhk.github.io/">Jing Ma @ HKBU</a>
<br>
<br>
<p style="text-align: justify;"> <strong> Abstract: </strong>
A chaotic phenomenon characterized by the massive spread of toxic content (such as misinformation, harmful memes, etc.) has increasingly become a daunting issue for human society. Recent advances in large language models (LLMs) and vision-language models (VLMs) offer transformative opportunities for enhancing social safety on digital platforms. This talk delves into innovative methods focusing on rumor detection, explainable fake news detection, harmful meme detection, and sarcasm detection. We explore LLM-based approaches for detecting textual rumors and fake news, highlighting how LLMs can flag misinformation and provide justifications behind the detection. Moving to multimodal challenges, we examine the detection of harmful memes and sarcasm, where VLMs can capture implicit clues by analyzing both visual and textual signals. This talk aims to provide insights for deploying advanced AI responsibly to address the growing challenges of safety issues.
</p>
<br>
<p style="text-align: justify;"> <strong>Bio:</strong>
Dr. Ma Jing is currently an Assistant Professor at the Department of Computer Science, Hong Kong Baptist University. She received her PhD from The Chinese University of Hong Kong in 2020. She has long-term and strong research interests in Natural Language Processing, Social Network Analysis and Mining, Fact-Checking, Information Retrieval, Large Language Models and Vision Language Models. She has co-authored more than 50 publications in refereed journals and conferences, including ACL, WWW, EMNLP, IJCAI, CIKM, TKDE and TIST, with more than 5400 citations so far. She was recognised as one of “The 2022 Women in AI” by AMiner and “The World's top 2% scientists” released by Stanford University, and her paper was selected as one of the Top Five Outstanding TIST Articles. From December 2018 to August 2019, she was a visiting scholar at Nanyang Technological University, Singapore, and from December 2019 to February 2020, a visiting scholar at the Institute for Basic Science, South Korea. In recent years, she has served as Area Chair for AACL 2023, NAACL 2024, ACL 2024, EMNLP 2024 and NLPCC 2024; Program Committee Member for WSDM 2023, WWW 2021-2023, AAAI 2019-2021, ACL 2019, EMNLP 2019, etc.; and has been invited to review for journals such as TIST, TKDE, TOMM, TNNLS, TPAMI, etc.
</p><br>
<br>
</div>
<!-- <div id="Vivian" class="collapse" data-parent="#accordion">
<strong>Title: </strong> From Bots to Buddies: Making Conversational Agents More Human-Like
<br>
<strong>Speaker: </strong><a href="https://www.csie.ntu.edu.tw/~yvchen/">Vivian Chen</a>
<br>
<br>
<p style="text-align: justify;"> <strong> Abstract: </strong>
While today's conversational agents are equipped with impressive capabilities, there remains a clear distinction between the intuitive prowess of humans and the operational limits of machines. An example of this disparity is evident in the human ability to infer implicit intents from users' utterances, subsequently guiding conversations toward specific topics or recommending appropriate tasks or products. This talk aims to elevate conversational agents to a more human-like realm, enhancing user experience and practicality. By exploring innovative strategies and frameworks that leverages commonsense knowledge, we delve into the potential ways conversational agents can evolve to offer more seamless, contextually aware, and user-centric interactions. The goal is to not only close the gap between human and machine interactions but also to unlock new possibilities in how conversational agents can be utilized in our daily lives.
</p><br>
<p style="text-align: justify;"> <strong>Bio:</strong>
Dr. Yun-Nung (Vivian) Chen is currently an associate professor in the Department of Computer Science & Information Engineering at National Taiwan University. She earned her Ph.D. degree from Carnegie Mellon University, where her research interests focus on spoken dialogue systems and natural language processing. She was recognized as the Taiwan Outstanding Young Women in Science and received Google Faculty Research Awards, Amazon AWS Machine Learning Research Awards, MOST Young Scholar Fellowship, and FAOS Young Scholar Innovation Award. Her team was selected to participate in the first Alexa Prize TaskBot Challenge in 2021. Prior to joining National Taiwan University, she worked in the Deep Learning Technology Center at Microsoft Research Redmond.
</p><br>
<br>
</div> -->
<!-- <div id="Tanya" class="collapse" data-parent="#accordion">
<strong>Title: </strong>Evaluation in the era of GPT-4
<br>
<strong>Speaker: </strong><a href="https://tagoyal.github.io/">Tanya Goyal</a>
<br>
<br>
<p style="text-align: justify;"> <strong> Abstract:</strong>
As large language models become more embedded in user applications, there is a push to align their outputs with human preferences. But human preferences are highly subjective, making both model alignment and evaluation extremely challenging. In this talk, I will first outline work that highlights this subjectivity, for a relatively well-defined tasks like summarization, and its effects on downstream model evaluations. Next, I will discuss how effectively trained models can capture human preferences and the impact of integrating these models into RLHF pipelines.
</p>
<br>
<p style="text-align: justify;"> <strong>Bio:</strong>
Dr. Tanya Goyal is an incoming (Fall 2024) assistant professor of Computer Science at Cornell University. For the 2023-2024 academic year, she is a postdoctoral researcher at the Princeton Language and Intelligence (PLI) group. Her current research focuses on designing scalable and cost-effective evaluation techniques for LLMs. Particularly, she is interested in understanding and modeling the subjectivity in human feedback, and how this affects both evaluation and training of LLMs at scale. Previously, she received her Ph.D. in computer science from the University of Texas at Austin in 2023, advised by Dr. Greg Durrett. Her thesis research focused on building tools to automatically detect attribution errors in generated text.
</p>
<br>
<br>
</div>
-->
<div class="table-responsive">
<table class="table">
<tbody>
<tr align="center">
<td>
<a href="#Diyi" data-toggle="collapse">
<figure>
<img src="images/speaker/tianyu-pang.jpg" class="hover2"
height="185">
<figcaption><h5>Tianyu Pang</h5></figcaption>
</figure>
</a>
</td>
<td>
<a href="#Joao" data-toggle="collapse">
<figure>
<img src="images/speaker/wenxuan-zhang.jpg" class="hover2" height="185">
<figcaption><h5>Wenxuan Zhang</h5></figcaption>
</figure>
</a>
</td>
<td>
<a href="#Daniel" data-toggle="collapse">
<figure>
<img src="images/speaker/taifeng-wang.png" class="hover2"
height="185">
<figcaption><h5>Taifeng Wang</h5></figcaption>
</figure>
</a>
</td>
<!-- <td>
<a href="#Huda" data-toggle="collapse">
<figure>
<img src="images/speaker/HudaKhayrallah.jpg" class="hover2"
height="185">
<figcaption><h5>Huda Khayrallah</h5></figcaption>
</figure>
</a>
</td> -->
</tr>
</tbody>
</table>
</div>
<div id="Diyi" class="collapse" data-parent="#accordion">
<strong>Title: </strong> Your LLM is Secretly a Fool and You Should Treat it Like One
<br>
<strong>Speaker: </strong><a href="https://p2333.github.io/ ">Tianyu Pang @ Sea AI</a>
<br>
<br>
<p style="text-align: justify;"> <strong> Abstract:</strong>
In this talk, I will present our recent work on jailbreaking/cheating LLMs and multimodal LLMs (MLLMs). The talk includes a quick overview of adversarial attacks and shows how LLMs/MLLMs enable much more flexible attack strategies. For example, we show that a null model that always returns a constant output can achieve an 86.5% LC win rate on AlpacaEval 2.0; we can also jailbreak one million MLLM agents exponentially fast (in, say, 5 minutes).
</p>
<br>
<p style="text-align: justify;"> <strong>Bio:</strong>
Dr. Tianyu Pang is a Senior Research Scientist at Sea AI Lab. He received his Ph.D. and B.S. degrees from Tsinghua University. His research interests span machine learning, including Trustworthy AI and Generative Models. He has published over 40 papers in top-tier conferences and journals, including ICML/NeurIPS/ICLR and CVPR/ICCV/ECCV/TPAMI, and his published papers have received over 9,000 citations. He is a recipient of the Microsoft Research Asia Fellowship (2020), the Baidu Scholarship (2020), the NVIDIA Pioneering Research Award (2018), the Zhong Shimo Scholarship (2020), the CAAI Outstanding Doctoral Dissertation Award (2023), and the WAIC Rising Star Award (2023), and was listed among the World's Top 2% Scientists (2024).
</p><br>
<br>
</div>
<div id="Joao" class="collapse" data-parent="#accordion">
<strong>Title: </strong>Auto-Arena: Towards Fully Automated LLM Evaluations
<br>
<strong>Speaker: </strong><a href="https://isakzhang.github.io/">Wenxuan Zhang @ DAMO</a>
<br>
<br>
<p style="text-align: justify;"> <strong> Abstract: </strong>
As large language models (LLMs) rapidly evolve, the challenge of evaluating their capabilities becomes increasingly crucial. In this talk, I will discuss the paradigm shift in LLM evaluation, tracing its evolution from traditional static benchmark-based methods to the LLM-as-a-judge approach, and ultimately to the renowned Chatbot Arena platform based on human voting. Throughout this journey, we observe a trend towards automation in various components of the evaluation process. Building on this trend, I will introduce our innovative solution: the Auto-Arena for LLMs. This automated evaluation framework leverages LLM-based agents to streamline the entire assessment process, from generating questions and participating in debates to evaluating one another within a committee. Remarkably, the Auto-Arena produces results that exhibit state-of-the-art correlation with human preferences—all without human intervention. I will conclude by sharing interesting findings from this project and exploring potential future directions in the realm of automated LLM evaluation and LLM improvements.
</p><br>
<p style="text-align: justify;"> <strong>Bio:</strong>
Dr. Wenxuan Zhang is currently a research scientist at Alibaba DAMO Academy in Singapore. He received his Ph.D. degree from the Chinese University of Hong Kong and then joined Alibaba Singapore with the Ali Star award. His primary research areas are natural language processing (NLP) and trustworthy AI. His research aims to advance NLP models that are both inclusive, supporting diverse languages and cultures through multilingual language models, and trustworthy, with improved safety and robustness. He has published over 40 papers in top-tier AI conferences and journals, including ICLR, NeurIPS, ACL, EMNLP, SIGIR, WWW, TOIS, and TKDE. He is the core tech lead of the SeaLLMs project (LLMs specialized for Southeast Asian languages), which has received significant community attention with over 200k downloads. He also regularly serves on the (senior) program committees of multiple leading conferences and journals.
</p><br>
<br>
</div>
<div id="Daniel" class="collapse" data-parent="#accordion">
<strong>Title: </strong> Train Large-Scale Language Model with High-Quality Data
<br>
<strong>Speaker: </strong><a href="https://www.linkedin.com/in/taifeng-wang-61783137/?originalSubdomain=cn">Taifeng Wang @ ByteDance</a>
<br>
<br>
<p style="text-align: justify;"> <strong> Abstract:</strong>
In the rapidly evolving field of artificial intelligence, training large-scale language models has emerged as a crucial area of research and development. Today's talk focuses on the significance of training large-scale language models with high-quality data. As language models continue to grow in size and complexity, the quality of the training data becomes paramount. High-quality data ensures more accurate and reliable language understanding and generation, enabling the model to capture nuanced language patterns, semantic relationships, and context. We will also discuss the various techniques employed to train large-scale language models effectively.
</p><br>
<p style="text-align: justify;"> <strong>Bio:</strong>
Taifeng Wang is currently a principal researcher at ByteDance. He received his master's degree from the University of Science and Technology of China. An expert in AI algorithms with over 20 years of R&D experience, he has served as a Principal Researcher at Microsoft Research Asia, AI Director of the Intelligent Engine Department at Ant Financial, and Head of AI Algorithms at Biomap. His research spans natural language processing (NLP), graph learning, distributed machine learning, multimodal learning, and AI-driven biopharmaceuticals. He has served as a Senior Area Chair for ACL and is one of the creators of LightGBM, famously known as a "Kaggle leaderboard weapon". He holds 17 Chinese patents and 20 U.S. patents, and his research has garnered over 16,000 citations on Google Scholar. His team is now building the large language foundation model at ByteDance. </p><br>
<br>
</div>
<!-- <div id="Huda" class="collapse" data-parent="#accordion">
<strong>Title: </strong> Perplexity-Driven Case Encoding Needs Augmentation for CAPITALIZATION Robustness
<br>
<strong>Speaker: </strong><a href="https://khayrallah.github.io/">Huda Khayrallah</a>
<br>
<br>
<p style="text-align: justify;"> <strong> Abstract:</strong>
For most NLP models, upper and lower case letters are represented with distinct code-points. In contrast, most people naturally connect upper and lower-cased letters as highly similar and therefore expect NLP models to perform similarly on inputs that only differ in casing. However, that is often not the case, and NLP models are often unstable on non-standard casings. Subword segmentation methods (e.g., BPE (Sennrich et al., 2016) and SPM (Kudo and Richardson, 2018)) handle the sparsity introduced by a variety of linguistic features (e.g. concatenative morphology) by learning a segmentation of words into shorter sequences of characters. However, such methods do not currently handle the sparsity introduced by casing well and can lead to terrible quality on ALL CAPS data. Prior work (Berard et al., 2019; Etchegoyhen and Gete, 2020) overcame the quality drop in machine translation but did so in a way that breaks the encoding optimality of perplexity driven methods, leading to impractical sequence length/runtime. In this work, we re-encode capitalization to allow the perplexity-driven subword segmentation model to learn how to best segment this linguistic feature. Naturally occurring data accurately describes the prevalence of capitalization but underestimates the importance humans ascribe to capitalization robustness. We propose data augmentation to fill this gap. Overall, we increase translation quality on data with different casings (compared to standard SPM), with minimal impact on decoding speed on standard cased data and large speed improvements on ALL CAPS data.
</p><br>
<p style="text-align: justify;"> <strong>Bio:</strong>
Dr. Huda Khayrallah is a senior researcher at Microsoft, working on the Microsoft Translator team. She holds a PhD in computer science from The Johns Hopkins University (JHU), where she was advised by Philipp Koehn. She also holds a bachelor’s in computer science from UC Berkeley. She has worked on a variety of topics in machine translation and NLP including: low resource MT, noisy data in MT, domain adaptation, chatbots, and more.
</p><br>
<br>
</div> -->
<!-- <div class="table-responsive">-->
<!-- <table class="table">-->
<!-- <tbody>-->
<!-- <tr align="center">-->
<!-- <td>-->
<!-- <a href="#Alessandro" data-toggle="collapse">-->
<!-- <figure>-->
<!-- <img src="images/speaker/Alessandro Moschitti.png" class="hover2" height="185">-->
<!-- <figcaption><h5>Alessandro Moschitti</h5></figcaption>-->
<!-- </figure>-->
<!-- </a>-->
<!-- </td>-->
<!-- <td>-->
<!-- <a href="#Lidong" data-toggle="collapse">-->
<!-- <figure>-->
<!-- <img src="images/speaker/lidong_bing.jpeg" class="hover2"-->
<!-- height="185">-->
<!-- <figcaption><h5>Lidong Bing</h5></figcaption>-->
<!-- </figure>-->
<!-- </a>-->
<!-- </td>-->
<!-- </tr>-->
<!-- </tbody>-->
<!-- </table>-->
<!-- </div>-->
<!-- <div id="Alessandro" class="collapse" data-parent="#accordion">-->
<!-- <strong>Title: </strong>Retrieval-Augmented Large Language Models for Personal Assistants-->
<!-- <br>-->
<!-- <strong>Speaker: </strong><a href="https://www.linkedin.com/in/alessandro-moschitti-10999a4/">Alessandro Moschitti</a> -->
<!-- <br>-->
<!-- <br>-->
<!-- <p style="text-align: justify;"> <strong> Abstract: </strong>-->
<!-- Recent work has shown that Large Language Models (LLMs) can potentially answer any question with high accuracy, also providing justifications of the generated output. At the same time, other research work has shown that even the most powerful and accurate models, such as ChatGPT 4, produce hallucinations, which often invalidate their answers. Retrieval-Augmented LLMs are currently a practical solution that can effectively solve the above-mentioned problem. However, the quality of grounding is essential in order to improve the model, since noisy context degrades the overall performance. In this talk, we present our experience with Generative Question Answering, which uses basic search engines and accurate passage rerankers to augment relatively small language models. Interestingly, our approach provides a more direct interpretation of knowledge grounding for LLMs.-->
<!-- </p><br>-->
<!-- <p style="text-align: justify;"> <strong>Bio:</strong>-->
<!-- Dr. Alessandro Moschitti is a Principal Research Scientist of Amazon Alexa AI, where he has been leading the science of Alexa information service since 2018. He designed the Alexa QA system based on unstructured text and more recently the first Generative QA system to extend the answer skills of Alexa. He obtained his Ph.D. in CS from the University of Rome in 2003, and then did his postdoc at The University of Texas at Dallas for two years. He was a professor in the CS Dept. of the University of Trento, Italy, from 2007 to 2021. He participated in the Jeopardy! Grand Challenge with the IBM Watson Research center (2009 to 2011), and collaborated with them until 2015. He was a Principal Scientist of the Qatar Computing Research Institute (QCRI) for five years (2013-2018). His expertise concerns theoretical and applied machine learning in the areas of NLP, IR and Data Mining. He is well-known for his work on structural kernels and neural networks for syntactic/semantic inference over text, documented by more than 330 scientific articles. He has received four IBM Faculty Awards, one Google Faculty Award, and five best paper awards. He was the General Chair of EACL 2023 and EMNLP 2014, a PC co-Chair of CoNLL 2015, and has had a chair role in more than 70 conferences and workshops. He is currently a senior action/associate editor of ACM Computing Surveys and JAIR. He has led ~30 research projects, e.g., with MIT CSAIL. -->
<!-- </p><br>-->
<!-- <br>-->
<!-- </div>-->
<!-- <div id="Lidong" class="collapse" data-parent="#accordion">-->
<!-- <strong>Title: </strong>Research and Implementation of Large Language Models at Alibaba DAMO Academy-->
<!-- <br>-->
<!-- <strong>Speaker: </strong><a href="https://lidongbing.github.io/">Lidong Bing</a>-->
<!-- <br>-->
<!-- <br>-->
<!-- <p style="text-align: justify;"> <strong> Abstract:</strong> -->
<!-- Over the past year, Large Language Models (LLMs) have brought about a significant transformation in the field of Natural Language Processing (NLP) and artificial intelligence (AI). This presentation will provide an overview of the research and practical initiatives carried out by Alibaba DAMO Academy in the domain of LLMs. On the practical front, the team has introduced an LLM called <a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b">SeaLLMs</a>, which demonstrates remarkable capabilities across major languages in the ASEAN region. When compared to models with similar parameter sizes, SeaLLMs has achieved state-of-the-art performance on various datasets, spanning from fundamental NLP tasks to complex general task solving. Additionally, SeaLLMs has been meticulously customized to enhance safety in these languages and improve its understanding of local cultures. On the research side, the presenter will introduce several recent projects undertaken by the team to advance the development of superior multilingual LLMs. These initiatives include the creation of a multilingual evaluation benchmark for LLMs, an extensive investigation into multilingual jailbreak, a framework that enhances LLMs by incorporating adaptive knowledge sources, a method for extending context length in pretraining, and a framework aimed at making LLMs more effective for low-resource languages. Lastly, the presenter will offer insights into the directions that the team will investigate in the near future. Additionally, he will provide information about career opportunities at DAMO Academy.-->
<!-- </p><br>-->
<!-- <p style="text-align: justify;"> <strong>Bio:</strong> -->
<!-- Dr. Lidong Bing is the director of the Language Technology Lab at DAMO Academy of Alibaba Group. He received a Ph.D. from The Chinese University of Hong Kong and was a postdoc research fellow at Carnegie Mellon University. His research interests include various low-resource and multilingual NLP problems, large language models and their applications, etc. He has published over 150 papers on these topics in top peer-reviewed venues. Currently, he is serving as an Action Editor for Transactions of the Association for Computational Linguistics (TACL) and ACL Rolling Review (ARR), as well as an Area Chair for AI conferences and Associate Editors for AI journals. </p><br>-->
<!-- <br>-->
<!-- </div>-->
</div>
</div>
</div>
</section>
<!--
<section id="panelists" class="bg-light">
<div class="container">
<div class="row">
<div class="col-lg-8 mx-auto">
<h3>Panel Discussions: Ethics in AI</h3>
<p>
The following speakers have agreed to serve as panelists for the panel discussion at SSNLP 2023.
You can view their detailed
information by clicking the images. Eduard Hovy and other academic speakers will also be discussants
on the panel (TBC).