演講摘要
Session Abstracts

共同議程
Plenary Agenda

Keynote 數位工程之模型化設計:影響與方向指引
Model-Based Design for Digital Engineering: Impact and Directions

John Wang
MathWorks

09:30~10:10

超過二十年來,以模型為基礎的設計(Model-Based Design)被視為複雜系統設計之可靠且強大的架構。如今,工程師面臨受到軟體開發趨勢、AI技術整合、以及雲端運算等因素相互衝擊碰撞而生的新工作流程。透過本段演講了解MathWorks如何投資在以模型為基礎的設計,藉此提供關鍵技能來因應自動化設計任務、預防缺陷、以及擴展至複雜度遽增的數位系統。

Model-Based Design has been a reliable and powerful framework for designing complex systems for more than two decades. Today’s engineers are confronting new workflows influenced by trends in software development, the integration of AI, and cloud computing. Learn how MathWorks is investing in Model-Based Design to deliver critical capabilities to automate design tasks, prevent defects, and scale to increasingly complex digital systems.

MATLAB & Simulink R2024a/b 最新功能
What's New in MATLAB & Simulink R2024a/b

10:10~10:40
摘要

分場 A : 無線通訊、高速傳輸與晶片開發
Track A: Wireless Communications, High-Speed Transmission, and Chip Development

A1 使用MATLAB推動無線通信開發–5G和NTN應用實例
Advancing Wireless Communications Development using MATLAB – 5G and NTN Practices

Dr. Hsi-Cheng (Jacky) Wang
MediaTek Inc.

11:20~12:00
摘要

隨著3GPP產業的快速演進,無線通信的開發需要創新與高效的流程來加速推動。我們探討了使用MATLAB在5G和非地面網絡(NTN)應用中的實例,展示了如何透過先進工具來加速無線通信技術的發展。

本段演講將討論:
  • 5G信號驗證的重要性,並通過兩個案例研究進行詳細說明 - 第一個案例研究探討了5G下行鏈路(DL)和上行鏈路(UL)信號的驗證過程,確保信號的準確性和可靠性。第二個案例研究則介紹了5G誤差向量幅度(EVM)測量儀在射頻(RF)元件中的應用,強調了精確測量對於元件性能評估的重要性。
  • 在NTN研究方面,我們將探討TN/NTN系統的干擾評估,並通過使用星曆數據進行衛星多普勒頻移建模,展示了如何應對衛星通信中的挑戰。

With the rapid evolution of the 3GPP industry, the development of wireless communication necessitates innovative and efficient processes to drive progress. This presentation explores the use of MATLAB in 5G and Non-Terrestrial Network (NTN) applications, showcasing how advanced tools can accelerate the development of wireless communication technologies.

This presentation will cover:
  • The importance of 5G signal validation, illustrated through two case studies. The first case study examines the validation process of 5G downlink (DL) and uplink (UL) signals, demonstrating how to ensure signal accuracy and reliability. The second case study introduces the application of a 5G Error Vector Magnitude (EVM) meter in Radio Frequency (RF) components, highlighting the significance of precise measurements for component performance evaluation.
  • In the realm of NTN research, we will examine the interference evaluation of TN/NTN systems and demonstrate how to address challenges in satellite communication through satellite Doppler shift modeling using ephemeris data.
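
As a concrete, simplified illustration of the EVM measurement theme above, the following MATLAB sketch measures RMS EVM on a generic 64-QAM waveform with the Communications Toolbox comm.EVM object. It is not the presenter's actual 5G DL/UL validation setup; the real workflow would generate standard-compliant waveforms with 5G Toolbox.

```matlab
% Minimal EVM measurement sketch (not the presenter's production setup):
% modulate random symbols with 64-QAM, pass them through an AWGN channel,
% and measure RMS EVM with the Communications Toolbox comm.EVM object.
M = 64;                                    % 64-QAM constellation
data  = randi([0 M-1], 1e4, 1);            % random symbols
txSym = qammod(data, M, 'UnitAveragePower', true);
rxSym = awgn(txSym, 30, 'measured');       % 30 dB SNR stand-in impairment
evm    = comm.EVM('ReferenceSignalSource', 'Input port');
rmsEVM = evm(txSym, rxSym);                % RMS EVM in percent
fprintf('RMS EVM: %.2f %%\n', rmsEVM);
```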

A2 混合訊號設計最佳化的嶄新方法
New Methodology of Mixed-Signal Design Optimization

Scott Li
Cadence

13:10~13:50
摘要

As semiconductor processes continue to advance, optimizing a mixed-signal design efficiently becomes more and more important. To achieve this goal, EDA tools play a key role and now offer many new methodologies to support it. This session will introduce the brand-new Virtuoso optimization solution, AOP (Advanced Optimization Platform), as well as a MATLAB optimization solution that leverages the Virtuoso/MATLAB integration.
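
As a hedged illustration of the MATLAB side of such an optimization loop (independent of the Cadence-specific AOP and Virtuoso/MATLAB link), the sketch below minimizes a stand-in cost function over a sizing vector with fminsearch. The cost function, variable names, and values are hypothetical; in practice the callback would invoke a circuit simulation and return the measured figure of merit.

```matlab
% Hypothetical sketch of driving a sizing optimization from MATLAB.
% evaluateCircuit stands in for a callback that would run a circuit
% simulation for the candidate sizing vector x and return a scalar cost.
evaluateCircuit = @(x) sum(((x - [1.5e-6, 0.8e-6, 47e3]) ./ [1e-6, 1e-6, 1e4]).^2);
x0   = [2e-6, 1e-6, 50e3];                 % initial W1, W2, Rbias guesses (hypothetical)
opts = optimset('Display', 'iter', 'TolX', 1e-9);
[xOpt, costOpt] = fminsearch(evaluateCircuit, x0, opts);
```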

A3 創新的FPGA/SoC每秒千兆次取樣(GSPS)訊號處理解決方案
Innovative Gigasamples-per-Second Signal Processing Solution for FPGA/SoC

Kishore Siddani
MathWorks

14:00~14:40
摘要

在本段演講,我們將探索與運用在5G NR、雷達、以及訊號智能等尖端科技的高速資料流處理有關的挑戰和解決方案。隨著類比到數位轉換器(analog-to-digital converters,ADCs)的進展,催生了新的DSP演算法架構以滿足這些應用對性能表現的嚴格要求。我們將鑽研有效的建模策略,探索並且為各種DSP演算法進行硬體架構模擬。除此之外,我們將討論產生可合成的VHDL及Verilog程式碼的方法,協助FPGA與SoC平台的無縫部署實現。

本段演講將討論 :
  • 運用以模型為基礎的設計方法來對DSP演算法建模與模擬,以有效地將FPGA/ASIC平台作為實現目標。
  • 利用數位降頻轉換(Digital Down Conversion,DDC)範例來檢查並促成有效率的基於取樣與基於框架的處理。
  • 使用最佳的實踐方式分析並改善硬體設計,以將延遲、吞吐量、與資源使用最佳化。
  • 產生可讀、可合成的VHDL和Verilog程式碼來進行FPGA部署。
  • 使用DSP HDL IP Designer APP輕鬆為DSP演算法產生HDL程式碼和DPI-C測試平台。

In this talk, we will explore the challenges and solutions associated with processing high-speed data streams in cutting-edge applications such as 5G New Radio (NR), radar, and signal intelligence. With advancements in analog-to-digital converters (ADCs), new architectures of DSP algorithms are emerging to meet the rigorous performance requirements of these applications. We will delve into effective strategies for modeling, exploring, and simulating hardware architecture for various DSP algorithms. Additionally, we will cover methods for generating synthesizable VHDL and Verilog code, enabling seamless deployment on FPGA and SoC platforms.

During this session, you will discover how to:
  • Apply Model-Based Design methodology to model and simulate DSP algorithms for effectively targeting FPGA/ASIC platforms.
  • Utilize a Digital Down Conversion (DDC) example to examine and enable efficient sample-based and frame-based processing.
  • Analyze and enhance hardware designs using best practices to optimize latency, throughput, and resource usage.
  • Generate readable, synthesizable VHDL and Verilog code for FPGA deployment.
  • Easily generate HDL code and DPI-C testbenches for DSP algorithms using the DSP HDL IP Designer app.
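
To make the sample-based DDC idea concrete, here is a minimal MATLAB sketch (DSP System Toolbox assumed) that mixes a band of interest to baseband and decimates with dsp.FIRDecimator. It is only a behavioral starting point; the HDL generation and DSP HDL IP Designer steps described above are not shown.

```matlab
% Minimal digital down-conversion (DDC) sketch: mix the band of interest to
% baseband, then decimate. Parameter values are illustrative only.
fs = 1.2e9;                       % input sample rate (GSPS-class ADC)
fc = 300e6;                       % carrier to bring down to baseband
M  = 16;                          % decimation factor
n  = (0:2^16-1).';
x  = cos(2*pi*fc*n/fs) + 0.01*randn(size(n));   % test input
lo = exp(-1j*2*pi*fc*n/fs);                     % ideal NCO, for illustration
mixed = x .* lo;                                % shift the band to DC
b   = designMultirateFIR(1, M);                 % anti-alias decimation filter
ddc = dsp.FIRDecimator(M, b);
y   = ddc(mixed);                               % baseband output at fs/M
```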

A4 硬體/軟體應用之設計、分析與SoC裝置部署
Design, Analyze and Deploy Hardware/Software Applications to SoC Devices

Kishore Siddani
MathWorks

15:10~15:50
摘要

半導體產業的快速發展催生了Versal ACAPs、MPSoC與RFSoCs等系統單晶片(Systems-on-Chip,SoC)平台以供複雜應用的原型化。同時間,在功能創新、緊迫的時程與產品品質等競爭的需求大大衝擊傳統的SoC原型化工作流程。也因此在真正接觸到實際硬體之前的設計決策愈來愈顯重要。本段演講將向你展示如何在SoC架構下建立模型及模擬演算法,藉此以真實的硬體框架來評估性能瓶頸。除此之外,我們將探討流暢的完整系統硬體實現工作流程。

本段演講要點包含:
  • 使用SoC Blockset提供的模塊為應用建立SoC模型,藉此定義ARM處理器核心、硬體邏輯、記憶體和週邊設備的介面。
  • 以包含RFSoCs之RF資料轉換器的SoC架構模擬演算法,以評估介面頻寬,處理瓶頸與其他設計權衡。
  • 使用HDL IP Importer將既有的HDL IP核心匯入並整合至SoC模型。
  • 建立任務執行、多核心執行、以及任務中斷的模型,以定義詳細處理器功能。
  • 使用SoC Builder app從SoC模型產生完整系統實現,包含執行於FPGA的位元流(bitstream)及執行於處理器的執行檔。

The semiconductor industry is rapidly evolving, bringing out heterogeneous Systems-on-Chip (SoC) platforms like Versal ACAPs, MPSoCs, and RFSoCs for prototyping complex applications. At the same time, the competing demands of functional innovation, aggressive schedules, and product quality have significantly strained traditional SoC prototyping workflows. It is becoming increasingly important to make design decisions before touching the actual hardware. This talk will show you how you can model and simulate algorithms with an SoC architecture to evaluate performance bottlenecks in an actual hardware context. Additionally, we will cover a streamlined workflow for complete system implementation on hardware.

During this session, you will discover how to:
  • Create an SoC model of an application by using blocks from SoC Blockset for defining interfaces between ARM processor cores, hardware logic, memory and peripherals.
  • Simulate algorithms with SoC architectures, including RF Data Converter for RFSoCs, to evaluate interface bandwidth, processing bottlenecks and other design tradeoffs.
  • Import and integrate existing HDL IP core into SoC model using HDL IP Importer.
  • Model task execution, multicore execution, and task interrupts to define detailed processor functionality.
  • Use SoC Builder app to generate complete system implementation from the SoC model including bitstream for FPGA and software executable for processor.

A5 使用類神經網路進行毫米波通訊波束選擇最佳化
Optimizing Beam Selection in Millimeter Wave Communications Using Neural Networks

Tim Yeh
TeraSoft

Michael Chen
TeraSoft

16:00~16:40
摘要

本演講將探討如何運用類神經網路來優化毫米波(mmWave)通訊中的波束選擇過程,降低系統複雜度並提升性能表現。隨著毫米波技術成為5G及未來6G系統的關鍵技術,傳統的波束管理方法難以滿足高頻通訊的需求。本議程將帶領參與者了解人工智慧在波束選擇中的應用,展示實際範例並討論AI在無線通訊中的未來發展。

This presentation will explore how neural networks can optimize the beam selection process in millimeter wave (mmWave) communications, reducing system overhead and enhancing performance. As mmWave technology becomes crucial in 5G and future 6G systems, traditional beam management methods struggle to meet the demands of high-frequency communication. This session will guide participants through the application of artificial intelligence in beam selection, showcasing practical examples and discussing the future of AI in wireless communication.
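
As a simplified, hedged sketch of treating beam selection as a classification problem, the code below trains a small fully connected network with fitcnet on synthetic features (standing in for coarse beam measurements) and reports top-1 beam prediction accuracy. The data model is invented for illustration; the talk's workflow would use measured or ray-traced channel data.

```matlab
% Beam selection as classification on synthetic data. Features stand in for
% coarse beam measurements; the label is the index of the best narrow beam.
numObs = 5000;  numWide = 4;  numNarrow = 16;
X = randn(numObs, numWide);                           % surrogate measurements
[~, y] = max(X * randn(numWide, numNarrow), [], 2);   % surrogate "best beam"
y  = categorical(y);
cv = cvpartition(numObs, "HoldOut", 0.2);
mdl = fitcnet(X(training(cv), :), y(training(cv)), "LayerSizes", [32 32]);
acc = mean(predict(mdl, X(test(cv), :)) == y(test(cv)));
fprintf("Top-1 beam prediction accuracy: %.1f %%\n", 100*acc);
```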

分場 B : 能源轉型、電動車與自主系統
Track B: Energy Transition, Electric Vehicles, and Autonomous Systems

B1 應用Model Based Design於馬達控制軟體的標準化開發流程
The Standardized Development Process of Motor Control Software with Model Based Design

Matt Ser
Delta Electronics

11:20~12:00
摘要

As the scale and complexity of modern automotive software grow rapidly, vehicle manufacturers are imposing ever stricter requirements on the design quality of supplier software. To meet the challenges of complex systems, Model-Based Design has long been a key software development solution in the automotive domain, used to satisfy development processes and standards such as ASPICE and ISO 26262.

Against this background, this talk introduces how Model-Based Design is applied to the development of automotive traction inverter motor control software: building a model development and integration platform, using physical modeling to simulate system behavior effectively and thereby improve the efficiency and quality of software component development, and embedding the systematic design and verification methods of Model-Based Design into the standardized ASPICE design process so that the software design better meets the demands of high-quality, standardized development.

B2 透過基於學習的自適應控制方法進行智慧型速度控制系統開發與測試
Development and Testing of Intelligent Speed Control Systems via Learning-based Adaptive Control Approach

Bobson Wu吳啓禾
國立臺北科技大學

13:10~13:50
摘要

近年來全球車廠皆致力於發展兼具安全性和方便性之車款,各車廠的新型車輛都持續導入更完整的先進駕駛輔助系統(ADAS),然而ADAS目前技術發展中仍面臨多重挑戰,包括感測器融合與可靠性問題、AI模型的訓練數據需求與可解釋性、多樣化駕駛場景的處理、法規標準的統一、駕駛員與系統的互動等。有鑒於此,本文針對ADAS中的速度輔助系統(SAS)進行研究,開發一套智慧型速度控制解決方案,其中包含自動緊急剎車、自適應巡航控制、以及能夠針對鄰近車道車輛變換車道時的情況進行車速調控。在速度輔助控制策略以人類駕駛行為為基礎進行訓練,結合自適應模糊邏輯系統來進行控制,根據當前的駕駛情境選擇適當的控制模式,當偵測到潛在的碰撞風險時,會觸發自動緊急煞車系統。進而,該系統利用前車的相對縱向距離與相對縱向車速等資訊,將這些數據回饋給自適應模糊推理系統(ANFIS),此系統能有效減緩因鄰近車道車輛危險的切入行為所引發的減速反應,並針對與此目標車輛的相對距離及相對車速進行調整,確保本車在不同車速下能夠保持安全且舒適的車輛跟隨功能,達到更佳的駕駛輔助效果。整個智慧型速度系統的開發過程中,在MATLAB / Simulink的軟硬體架構下完成一整套的開發流程,徹底發揮MATLAB功能強大的Toolbox來輔助設計,以及善用Simulink圖形化語言的優點。經過軟體模擬與高架橋等工況進行實車測試,實驗結果顯示本文所發展智慧型速度控制策略,包括自動緊急煞車、自適應巡航控制、及自適應模糊推理系統控制,除了確保能夠在緊急狀況下煞停,並能顯著改善由前車緊急切入所引發的減速過大反應,並有效避免潛在的碰撞事故。

In recent years, automotive manufacturers worldwide have been dedicated to developing vehicles that combine safety and convenience. New vehicle models from these manufacturers continue to incorporate more comprehensive Advanced Driver Assistance Systems (ADAS). However, the ongoing development of ADAS technology faces numerous challenges, including sensor fusion and reliability issues, the need for extensive training data for AI models and their interpretability, handling diverse driving scenarios, standardizing regulations, and ensuring effective driver-system interaction. Considering these challenges, this paper focuses on the Speed Assistance System (SAS) within ADAS, proposing an intelligent speed control solution. This solution integrates automatic emergency braking, adaptive cruise control, and speed adjustment in response to lane-changing maneuvers in adjacent lanes. The speed assistance control strategy is trained based on human driving behavior and employs an Adaptive Neuro-Fuzzy Inference System (ANFIS) for control, selecting the appropriate control mode according to the current driving situation. When a potential collision risk is detected, the system triggers automatic emergency braking. Additionally, the system utilizes data such as the relative longitudinal distance and speed of the preceding vehicle, feeding this information back into ANFIS. This feedback loop effectively mitigates the deceleration response caused by aggressive lane changes from adjacent vehicles and adjusts the relative distance and speed to ensure safe and comfortable vehicle following at various speeds while enhancing overall driving assistance. The entire development process of the smart speed control system was conducted within the MATLAB/Simulink framework, leveraging the powerful toolbox features of MATLAB for design assistance and the advantages of Simulink’s graphical programming language. Software simulations and real-vehicle tests on elevated roads demonstrate that the proposed smart speed control strategy, which includes automatic emergency braking, adaptive cruise control, and ANFIS-based control, not only ensures autonomous braking in emergency but also significantly improves the deceleration response to sudden lane changes by leading vehicles, effectively preventing potential collisions.
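
The ANFIS component described above can be sketched in a few lines with Fuzzy Logic Toolbox. The example below trains an adaptive neuro-fuzzy inference system that maps relative distance and relative speed to a longitudinal command; the training data here are synthetic placeholders for the recorded human-driving data used in the actual work.

```matlab
% Train an ANFIS mapping [relative distance, relative speed] to a
% longitudinal command, as a stand-in for the human-driving-trained
% controller described above. The training data here are synthetic.
relDist  = 5 + 95*rand(2000, 1);                          % m
relSpeed = -10 + 20*rand(2000, 1);                        % m/s (closing < 0)
cmd = tanh(0.05*(relDist - 30)) + 0.3*tanh(relSpeed);     % surrogate target
trnData = [relDist relSpeed cmd];

genOpt   = genfisOptions('GridPartition', 'NumMembershipFunctions', 5);
initFis  = genfis(trnData(:, 1:2), trnData(:, 3), genOpt);
anfisOpt = anfisOptions('InitialFIS', initFis, 'EpochNumber', 30, ...
                        'DisplayANFISInformation', 0, 'DisplayErrorValues', 0);
fis = anfis(trnData, anfisOpt);

u = evalfis(fis, [25 -3]);   % command for a 25 m gap, closing at 3 m/s
```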

B3 工業/家用儲能系統發展與逆變器測試方案
Industrial and Residential Energy Storage System Development and Inverter Test Solutions

Evan Tsai
Chroma

14:00~14:40
摘要

The Chroma energy storage inverter test solution consolidates more than 20 inverter test requirements in five categories, covering energy-storage-system regulations such as GB/T 34120, GB/T 34133, IEC 62933, and SGSF-04, the distributed-energy grid-interconnection standards UL 1741 and IEEE 1547, the German low-voltage grid-connection standard VDE-AR-N 4105, and the Australian/New Zealand inverter grid-connection standard AS/NZS 4777.2. The tests include grid-connection tests, performance tests, input/output characteristic tests, protection characteristic tests, and PV characteristic tests. For the short-circuit current measurement required by the VDE standard, Chroma provides a professional test method that accurately measures the maximum output short-circuit current of the device under test while avoiding disputes over whether the device or the simulator trips its protection first, as well as the risk of outages caused by shorting the grid connection. The solution is built on the Chroma 8000 ATS, combined with grid/battery simulators and measurement equipment such as the Chroma 61800, 62000D, and 17040 series; users only need to define the test conditions and specifications to run automated inverter tests with optimized PCS system test items.

B4 使用MATLAB與Simulink進行無人航空載具開發
Developing Unmanned Aerial Vehicles (UAV) with MATLAB and Simulink

Sarah Hung
TeraSoft

15:10~15:50
摘要

This talk is organized into three parts. The first introduces the UAV models and sensor models of different fidelity levels provided by the tools, which support algorithm simulation and testing at each stage of development. The second part covers UAV algorithms, including perception, path planning, and control. The third part demonstrates UAV simulation environments, such as the cuboid scenario environment that can be designed quickly with the UAV Scenario Designer app; MATLAB can also be combined with Unreal Engine for simulation. Finally, we look at algorithm implementation: your algorithms can be deployed to MCU, GPU, FPGA, and PX4 target hardware.
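
As a minimal, hedged sketch of the first part (UAV models of different fidelity), the code below steps the UAV Toolbox low-fidelity multirotor guidance model forward in time with a simple Euler loop. The thrust command value is an assumption; consult the model's control structure documentation for the exact command fields and units.

```matlab
% Step the UAV Toolbox low-fidelity multirotor guidance model with a simple
% Euler loop. The command value below is illustrative; see the model
% documentation for the exact control fields and their units.
model = multirotor;                 % low-fidelity multirotor guidance model
s = state(model);                   % default initial state
u = control(model);                 % default control structure
e = environment(model);             % default environment
u.Thrust = 1;                       % assumed normalized thrust command
dt = 0.01;  T = 5;  N = round(T/dt);
traj = zeros(N, numel(s));
for k = 1:N
    sdot = derivative(model, s, u, e);
    s = s + dt*sdot;                % simple fixed-step Euler integration
    traj(k,:) = s.';
end
```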

B5 基於模型的設計如何進行CI/CD的整合
Model Based Design with CI/CD

Kary Chang
TeraSoft

16:00~16:40
摘要

DevOps (Development and Operations) is a way of working that emerged so that overall development can deliver both speed and quality. Through collaboration between developers and IT staff, technology integration, and process automation, it effectively improves cross-team efficiency and product quality. CI/CD pipelines grew out of the same idea: through continuous integration and continuous deployment, code problems are detected automatically for developers during development and the results are deployed to servers, further improving software quality. This talk discusses how Model-Based Design (MBD) can be brought into a CI/CD pipeline.
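
A typical CI stage for a Model-Based Design project boils down to a non-interactive MATLAB script that runs the test suite and publishes machine-readable results. The sketch below (folder name "tests" assumed) produces JUnit-format XML that Jenkins, GitLab CI, or similar servers can consume.

```matlab
% Run unit/model tests non-interactively and export JUnit-format results
% for the CI server to parse. The "tests" folder name is an assumption.
import matlab.unittest.TestRunner
import matlab.unittest.plugins.XMLPlugin

suite  = testsuite("tests", "IncludeSubfolders", true);
runner = TestRunner.withTextOutput;
runner.addPlugin(XMLPlugin.producingJUnitFormat("test-results.xml"));
results = runner.run(suite);
assertSuccess(results);   % errors (non-zero exit in -batch mode) if any test failed
```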

分場 C : 人工智慧應用與實踐
Track C: AI Applications and Practice

C1 使用MATLAB進行半導體和電子產品的自動視覺檢測
Automated Visual Inspection for Semiconductor and Electronics using MATLAB

John Wang
MathWorks

11:20~12:00
摘要

半導體製造產業的工程師透過計量學、瑕疵檢驗、以及故障分析來完善產能、品質與可靠度。在這之中涉及自不同製造階段擷取出來的影像的分析。
MATLAB可幫助你應用傳統影像處理/電腦視覺技巧,或者使用AI方法來自動化影像分析來進行瑕疵檢測、故障分析或計量分析。
本段演講我們將以晶圓瑕疵偵測為例,綜觀典型的步驟、技巧,以及分析影像、偵測物件、執行量測等工作流程所適用的工具。

演講要點:
  • 資料存取與影像預處理技巧
  • 語義分割和瑕疵、異常標記
  • 使用深度學習技巧進行瑕疵偵測與網路決策視覺化
  • 原型與產品部署

Semiconductor manufacturing engineers perform metrology, defect inspection, and failure analysis to optimize yield, quality, and reliability. This involves analyzing images captured across different stages of manufacturing.
MATLAB enables you to apply traditional image processing/computer vision techniques as well as AI approaches to automate image analysis for defect inspection, failure analysis, or metrology.
In this session, we will use a wafer defect detection example to provide an overview of the typical steps, techniques, and tools of a workflow to analyze images, detect objects, and perform measurements.

Highlights
  • Data access and image preprocessing techniques
  • Semantic segmentation and labeling of defects and abnormalities
  • Defect detection and visualizing network decisions using deep learning techniques
  • Deployment for prototyping and production
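
For contrast with the deep learning approach above, a purely classical defect-candidate pass can be written with Image Processing Toolbox in a few lines; the sketch below thresholds, cleans up, and measures candidate regions. The file name "wafer.png" is a placeholder.

```matlab
% Classical (non-AI) defect-candidate pass: threshold, clean up, and measure
% candidate regions. "wafer.png" is a placeholder image name.
I  = im2gray(imread("wafer.png"));
bw = imbinarize(imcomplement(I), "adaptive");        % dark anomalies -> foreground
bw = bwareaopen(imopen(bw, strel("disk", 2)), 25);   % remove speckle
stats = regionprops("table", bw, I, "Area", "Centroid", "MeanIntensity");
disp(stats);                       % candidate defect measurements
imshowpair(I, bw, "montage");      % quick visual check
```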

C2 Development of Lightweight Deep Learning Single-Lead ECG Algorithms: Verification and Accurate Identification of Atrial Fibrillation Patients and Sleep Apnea Events in REM/NREM Sleep

Febryan Setiawan, Ph.D.
國立成功大學

13:10~13:50
Highlights

The dissertation introduces novel methodologies that employ advanced deep learning techniques to detect obstructive sleep apnea (OSA) by analyzing single-lead ECG data, focusing on identifying OSA patterns during REM and non-REM (NREM) sleep stages and exploring their relationship with atrial fibrillation (AF). The main objective is to develop lightweight deep learning systems using single-lead ECG for detection of REM- and NREM-related OSA events, while also investigating their association with AF for diagnostic and treatment planning purposes. The research involves creating lightweight deep learning-based OSA detection systems to identify apnea-hypopnea (AH) and non-AH episodes, differentiate OSA and central sleep apnea (CSA) events, recognize REM- and NREM-related OSA episodes, and detect AF associated with OSA. This work emphasizes the significance of these innovative approaches in advancing sleep medicine, enhancing diagnostic precision, and enabling personalized interventions for managing health implications linked to OSA.

An AH and non-AH episode detection algorithm, which combines bag-of-features (BoF) extracted from an ECG spectrogram with a deep learning framework utilizing the DarkNet-based convolutional neural network (CNN) called Tiny-DarkOSANet, was developed and achieved commendable classification accuracy and exceptional temporal resolution. Validation was conducted using overnight ECG recordings sourced from the PhysioNet Apnea-ECG (PAED) and National Cheng Kung University Hospital Sleep Center database (NCKUHSCD), with a total of 83 subjects and an average apnea-hypopnea index (AHI) of 29.63 per hour. This research identified the optimal spectrogram window duration by examining 10-second and 60-second intervals and isolated frequency bands to generate BoF, determining the most suitable frequency band for precise AH and non-AH episode detection. The algorithm achieved impressive accuracies of 91.2% and 85.6%, sensitivities of 90.5% and 85.8%, specificities of 92% and 85.5%, and AUC values of 0.9757 and 0.9371 for the different time windows and frequency bands. Additionally, a novel AH and non-AH episode detection algorithm was constructed using a 1D deep CNN with empirical mode decomposition (EMD) applied to preprocessed ECG signals. This algorithm was validated with 33 overnight ECG recordings from the PAED. The 1D deep CNN model demonstrated high accuracy in segment-level classification with 93.8% accuracy, 94.9% sensitivity, and 92.7% specificity through 5fold-CV, and in subject-level classification with 83.5% accuracy, 75.9% sensitivity, and 88.7% specificity validated via leave-one-subject-out cross-validation (LOSO-CV). Furthermore, the model was successfully deployed on the NVIDIA Jetson Nano embedded platform, achieving 92.5% accuracy with a mere 1 MB parameter size.

A detection system for various SA events, distinguishing normal breathing (NB), OSA, and CSA, was proposed utilizing 1D single-lead ECG signals and 2D single-lead ECG continuous wavelet transform (CWT) spectrograms combined with a vision-transformer-based classification model called ViSATNet. This research underscores the importance of differentiating between CSA and OSA for accurate diagnosis and tailored management, thereby improving patient outcomes. The algorithm was validated using overnight ECG recordings from 60 subjects sourced from the Sleep Heart Health Study Visit 1 database (SHHS1). The 1D-ViSATNet model achieved an overall accuracy of 86.1%, sensitivity of 79%, and specificity of 89.7%, while the 2D-ViSATNet model attained an overall accuracy of 96.9%, sensitivity of 95.4%, and specificity of 97.7%.

A system for detecting REM and NREM-related OSA episodes was developed using a DarkNet-based CNN classification model named OSADeepNet-Lite, which incorporates electrocardiomatrix (ECM) feature visualization of heart rate variability. This algorithm was validated with overnight ECG recordings from 50 subjects from the NCKUHSCD database. The system features a sleep monitoring capability with minimal memory usage, real-time functionality, and user-friendliness. It achieved high performance metrics, including an accuracy of 91.42%, sensitivity of 82.83%, and an F1-score of 0.8283. Each event is processed in approximately 0.2 seconds, allowing for the analysis of an entire night's events within 3 minutes.

A system for detecting AF-related OSA was developed using independent component analysis (ICA) and a DarkNet-based CNN deep learning framework called SHHDeepNet. This system was validated with overnight ECG recordings from the SHHS1 database. The algorithm demonstrated exceptional performance in segment-level binary classification with 5-fold cross-validation (5fold-CV), achieving 98.22% accuracy, 96.8% sensitivity, 99% specificity, and an AUC value of 0.9981. In multi-class classification, it achieved 98.36% accuracy, 97.14% sensitivity, 98.77% specificity, and an AUC value of 0.9975 for 5fold-CV. For subject-level classification using leave-one-subject-out cross-validation (LOSO-CV), the algorithm attained 86.42% accuracy, 79.4% sensitivity, 90.2% specificity, and an AUC value of 0.9372 for binary classification, and 86.7% accuracy, 72.6% sensitivity, 90.4% specificity, and an AUC value of 0.9166 for multi-class classification.

In summary, this dissertation successfully developed single-lead ECG-based lightweight deep learning algorithms to detect OSA events and their associated phenotypes. It represents pioneering research in utilizing lightweight DL models for the detection of various types of SA events, including OSA events during non-REM and REM sleep stages, as well as AF-related OSA detection, yielding promising results. The AH and non-AH episode detection systems demonstrated high accuracy and sensitivity, with excellent temporal resolution and low memory usage. As the first pioneering system using ECG signals and a deep learning framework, a comprehensive SA event detection system was successfully developed, achieving balanced classification in distinguishing NB, OSA, and CSA. Additionally, a precise and robust lightweight deep learning algorithm tailored for single-lead ECG was developed and validated, effectively identifying REM and NREM-related OSA events. The dissertation also introduced a novel system for identifying AF-related OSA, which showed superior performance in distinguishing AF and OSA events compared to conventional methods. These technologies enable immediate and continuous analysis of sleep-related breathing behaviors, offering significant potential for disease management and treatment advancements.
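
As a small illustration of the kind of time-frequency input these detectors consume, the sketch below builds a spectrogram image from a 60-second single-lead ECG segment (Signal Processing Toolbox assumed). A synthetic signal stands in for the PhysioNet and NCKUHSCD recordings used in the dissertation.

```matlab
% Build a time-frequency image from a 60-second single-lead ECG segment,
% the type of input the CNN-based detectors above consume. A synthetic
% signal stands in for the PhysioNet / NCKUHSCD recordings.
fs  = 100;                                         % Hz sampling rate
t   = (0:60*fs-1)/fs;
ecg = 0.8*sin(2*pi*1.2*t) + 0.05*randn(size(t));   % crude ECG surrogate
[s, f, tt] = spectrogram(ecg, hann(256), 192, 256, fs);
imagesc(tt, f, 10*log10(abs(s) + eps)); axis xy;
xlabel('Time (s)'); ylabel('Frequency (Hz)');
title('ECG spectrogram segment (illustrative)');
% A CWT scalogram alternative (Wavelet Toolbox): [wt, fw] = cwt(ecg, fs);
```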

C3 駕馭AI,落實更快的系統設計:在MATLAB進行模型降階
Harnessing AI for Faster System Design: Reduced Order Modeling in MATLAB

Jayanth Balaji Avanashilingam
MathWorks

14:00~14:40
摘要

在以模型為基礎的設計流程中,虛擬模型對於複雜系統的設計佔據相當重要的地位。建立高逼真度的虛擬模型來精確地捕捉硬體行為具有很高的價值,不過這些模型對於整個開發階段來說可能太過耗時且不適當。模型降階(Reduced order modeling,ROM)提供更快速、逼真度稍低的近似物。本段演講將聚焦以AI為基礎的ROM,展示如何進行實驗設計及AI模型訓練,並且將這些AI模型整合至Simulink模擬以進行嵌入式系統的硬體迴圈測試或虛擬感測器應用。本段演講也將討論各種ROM方法的優缺點,幫助你選擇對你的專案最適合的方法。

演講要點:
  • 使用No/Low Code工作流程建立以AI為基礎的降階模型
  • 將經過訓練的AI模型整合進Simulink以進行系統層級模擬
  • 產生經過最佳化的C程式碼並執行HIL測試

In Model-Based Design, virtual models are crucial for designing complex systems. Creating high-fidelity virtual models that accurately capture hardware behavior is valuable, but these models can be time-consuming and unsuitable for all development stages. Reduced order modeling (ROM) offers faster, lower-fidelity approximations. This talk will focus on AI-based ROM, demonstrating how to conduct a design of experiments, train AI models, and integrate these AI models into Simulink simulations for hardware-in-the-loop testing or virtual sensor applications in embedded systems. The talk will also cover the pros and cons of various ROM approaches to help you select the best one for your project.

Highlights
  • Creating AI-based reduced order models using the No/Low Code workflow.
  • Integrating trained AI models into Simulink for system-level simulation
  • Generating optimized C code and performing HIL tests
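
A minimal sketch of the AI-based ROM idea, assuming Statistics and Machine Learning Toolbox: run a design of experiments, fit a neural-network regression surrogate with fitrnet, and check its accuracy on held-out points. A cheap analytic function stands in for the expensive full-order simulation; in practice the trained surrogate would be brought into Simulink via a prediction block or generated code.

```matlab
% Fit an AI-based reduced-order model (ROM) to design-of-experiments data.
% A cheap analytic function stands in for the expensive full-order model.
nSamples = 2000;
X = lhsdesign(nSamples, 3);                                    % DOE over 3 inputs
y = 3*X(:,1).^2 - 2*X(:,2).*X(:,3) + 0.01*randn(nSamples, 1);  % "expensive" output
rom = fitrnet(X, y, "LayerSizes", [32 32], "Standardize", true);

Xtest = lhsdesign(200, 3);                                     % held-out check
ytest = 3*Xtest(:,1).^2 - 2*Xtest(:,2).*Xtest(:,3);
rmse  = sqrt(mean((predict(rom, Xtest) - ytest).^2));
fprintf("ROM RMSE on held-out DOE points: %.3g\n", rmse);
```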

C4 以漂移感知的增量學習工作流程提升AI的可操作性
Operationalizing AI with Drift Aware Incremental Learning Workflow

Jayanth Balaji Avanashilingam
MathWorks

15:10~15:50
摘要

漂移感知的增量學習(drift-aware incremental learning)被視為一種有助於改善AI模型可操作性的可行方法。這種方法支援了模型因應資料類型演變的即時適應性,確保持續的精確度與性能表現。探索如何使用MATLAB無縫地將漂移感知技巧融入至你的MLOps流程,促進有效率的模型更新和健全的部署。展現合併增量學習與漂移偵測的優勢,進而打造最適模型健康與動態環境的可靠性。

演講要點:
  • 使用低程式碼工作流程開發資料導向模型
  • 在產品級環境進行模型部署
  • 資料漂移的監測
  • 以回饋為基礎的模型再訓練與工作流程的可操作化

Enhancing the operationalization of AI models is possible with drift-aware incremental learning. This method allows for real-time adaptation of models as data patterns evolve, ensuring sustained accuracy and performance. Discover how to seamlessly incorporate drift-aware techniques into your MLOps pipeline using MATLAB, facilitating efficient model updates and robust deployment. Uncover the advantages of merging incremental learning with drift detection to uphold optimal model health and reliability in dynamic environments.

Highlights
  • Developing a data-driven model using a low-code workflow
  • Deploying the model in a production environment
  • Monitoring data drift
  • Feedback-based model retraining and operationalizing the workflow
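
A hedged sketch of the incremental-learning core of this workflow: stream data chunks through an incremental regression model with updateMetricsAndFit and watch the windowed loss as a simple drift indicator (Statistics and Machine Learning Toolbox also provides dedicated drift-detection objects, not shown here). The data and the injected shift are synthetic.

```matlab
% Stream data through an incremental regression model and monitor a
% windowed error metric as a simple drift indicator. Data are synthetic,
% with a deliberate shift injected halfway through the stream.
mdl = incrementalRegressionLinear();          % default incremental model
chunk = 50;  nChunks = 200;  err = nan(nChunks, 1);
for k = 1:nChunks
    X = randn(chunk, 4);
    w = [1; -2; 0.5; 3];
    if k > nChunks/2, w = [0; 1; 2; -1]; end  % injected concept drift
    y = X*w + 0.1*randn(chunk, 1);
    mdl = updateMetricsAndFit(mdl, X, y);     % update metrics, then train
    err(k) = mdl.Metrics{"EpsilonInsensitiveLoss", "Window"};
end
plot(err);                                    % NaN during the metrics warm-up
xlabel("Chunk"); ylabel("Windowed loss");     % a loss jump flags the drift
```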

C5 MATLAB與大型語言模型應用
Large Language Models (LLMs) with MATLAB

Fred Liu
TeraSoft

16:00~16:40
摘要

此演講將探討如何將大型語言模型(LLMs)與MATLAB結合的可能性:如何使用LLMs加速MATLAB端開發,以及如何透過整合LLMs提升數據分析與自動化能力,例如在LLMs上開發RAG (Retrieval Augmented Generation)功能。LLMs因其在自然語言處理方面的優勢,逐漸在各個領域中得到應用,以簡化任務、分析大型數據集並增強人工智能驅動的應用。此外,還會討論整合過程中的挑戰,並對未來在MATLAB環境中實現先進AI解決方案的前景進行展望。

This presentation will explore the possibilities of integrating Large Language Models (LLMs) with MATLAB, how to accelerate MATLAB development using LLMs, and how to enhance data analysis and automation capabilities through LLMs integration. One example is the development of Retrieval Augmented Generation (RAG) functionalities on LLMs. LLMs, known for their advantages in natural language processing, are increasingly being applied across various fields to streamline tasks, analyze large datasets, and enhance AI-driven applications. Additionally, the challenges encountered during the integration process will be discussed, along with a forward-looking view on realizing advanced AI solutions in MATLAB environments.
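
As a minimal sketch of the LLM-plus-retrieval idea, assuming the open-source "Large Language Models (LLMs) with MATLAB" add-on and an OpenAI API key, the code below prepends retrieved text to the user prompt before calling the model. The file name and prompts are placeholders, and the add-on interface may differ between versions.

```matlab
% Requires the open-source "Large Language Models (LLMs) with MATLAB"
% add-on and an OpenAI API key (typically the OPENAI_API_KEY environment
% variable). File name and prompts are placeholders.
retrieved = fileread("matlab_notes.txt");            % placeholder retrieval step
prompt = "Using the following notes, explain how to vectorize this loop:" + ...
    newline + string(retrieved);
chat = openAIChat("You are a MATLAB coding assistant.");
answer = generate(chat, prompt);                     % call the hosted LLM
disp(answer);
```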

分場 D : 高手養成技術講堂
Track D: Technical Masterclass Sessions

D1 工程設計之模擬模型:透過MATLAB工具達到高精確度與彈性
Simulation Models in Engineering Design: Achieving High Fidelity and Flexibility via MathWorks Tools

Shirley Wen
TeraSoft

11:20~12:00
摘要

This talk uses heat exchanger modeling as a case study to provide a comprehensive look at modeling strategies of varying fidelity, matched to the needs of different stages of the design process. The main approaches discussed include:

  • Determining and implementing an effective thermal resistance (a small worked example follows this list)
  • Using the system-level heat exchanger blocks in Simscape Fluids
  • Using the geometry-based heat exchanger blocks in Simscape Fluids for detailed modeling
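
A small worked example of the first approach, effective thermal resistance, using only base MATLAB arithmetic; all values are illustrative.

```matlab
% Lumped series thermal resistance (convection - conduction - convection)
% across one heat-exchanger wall element. Values are illustrative only.
A  = 0.5;       % m^2, exchange area of the element
h1 = 800;       % W/(m^2*K), hot-side convection coefficient
h2 = 1200;      % W/(m^2*K), cold-side convection coefficient
tw = 2e-3;      % m, wall thickness
k  = 15;        % W/(m*K), wall conductivity (stainless-steel class)
Rtot = 1/(h1*A) + tw/(k*A) + 1/(h2*A);    % K/W, series thermal resistance
Thot = 90;  Tcold = 30;                   % degC, bulk fluid temperatures
Q = (Thot - Tcold)/Rtot;                  % W, heat flow through the element
fprintf("R_total = %.4f K/W, Q = %.0f W\n", Rtot, Q);
```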

D2 基於模型設計之四旋翼機飛行控制系統開發、測試、驗證、與部署
Model-Based Design for Quadrotor Flight Control System Development, Testing, Verification, and Deployment

Yang-Rui Li 李洋銳
國立成功大學

13:10~13:50
摘要

A UAV is a system with complex dynamics and tightly integrated hardware and software. To achieve different flight objectives it may carry different sensors and components, such as an inertial measurement unit, a global navigation satellite system receiver, a vision module, and a companion computer for decision-making and control. As a result, early development requires substantial manpower and debugging effort to integrate and cross-test each function. Given the rise of open-source UAV hardware and software, this work uses the UAV Toolbox Support Package for PX4 Autopilots in Simulink to rapidly develop, test, verify, and deploy control algorithms. First, the physical model of the UAV is built in the Simulink environment, the corresponding flight control system is designed according to the flight requirements, and numerical simulations are performed. This stage is called software-in-the-loop (SIL) testing, in which the control algorithms are rapidly iterated and verified against the flight requirements; it helps reduce the risk, cost, and development time of testing on real hardware. Next, the controller is deployed directly to the actual Pixhawk hardware and cross-tested against the digital twin model built in the previous step, to ensure that the implemented algorithm reproduces the results obtained in simulation; this stage is called hardware-in-the-loop (HIL) testing. Finally, actual flight tests are carried out to verify that the developed control algorithms can operate in the real world and complete the flight mission. The experimental results show that with this toolbox, the SIL-HIL workflow enables rapid algorithm development, testing, verification, and deployment, which facilitates the division of work among functional modules and greatly reduces intermediate costs.
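
Before any Pixhawk hardware is involved, the SIL idea can be previewed even in plain MATLAB; the hedged sketch below closes a PD loop around a simplified altitude channel of a quadrotor. The actual workflow in this talk uses Simulink models and the UAV Toolbox Support Package for PX4 Autopilots.

```matlab
% A simplified altitude channel of a quadrotor under PD control, simulated
% entirely in MATLAB as a SIL-style sanity check before touching hardware.
m = 1.2;  g = 9.81;                 % kg, m/s^2
Kp = 6;  Kd = 4;                    % PD gains (illustrative)
zRef = 10;                          % m, altitude command
dyn = @(t, x) [x(2);
               (m*g + Kp*(zRef - x(1)) - Kd*x(2))/m - g];   % thrust = weight + PD
[t, x] = ode45(dyn, [0 10], [0; 0]);
plot(t, x(:, 1));  yline(zRef, "--");
xlabel("Time (s)");  ylabel("Altitude (m)");
title("SIL-style closed-loop check of the altitude controller");
```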

D3 強化學習於控制領域的流程:設計、測試、部署
Reinforcement Learning Workflow for Controls: Design, Test, Deployment

Jeffrey Liu
TeraSoft

14:00~14:40
摘要

強化學習因能自動學習複雜控制而受到越來越多的關注,但強化學習也因經常使用神經網絡,相較於傳統的控制理論難以保證系統的穩定性。

在本章節中,我們將介紹如何透過 MATLAB 和強化學習工具箱將強化學習用於控制設計。我們也介紹該工具中可用的一些最新功能,還將介紹強化學習控制器的設計、程式碼產生和部署的完整流程。

Reinforcement learning has been gaining attention as a new control design method that can automatically learn complex control and realize high performance. However, reinforcement learning policies often use deep neural networks, which makes it difficult to guarantee the stability of the system with conventional control theory.

In this session, we will introduce how to use reinforcement learning for control design with MATLAB and Reinforcement Learning Toolbox. We will cover some of the latest features available in the tool and walk through the complete workflow for the design, code generation, and deployment of a reinforcement learning controller.
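
A shape-of-the-workflow sketch with Reinforcement Learning Toolbox, using a predefined cart-pole environment and a default DQN agent; training settings are deliberately tiny, and a real control design would need far more episodes plus a Simulink environment of the actual plant.

```matlab
% Environment -> default agent -> train -> export policy for code generation.
% Training settings are deliberately tiny; this only shows the workflow shape.
env     = rlPredefinedEnv("CartPole-Discrete");
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
agent   = rlDQNAgent(obsInfo, actInfo);             % default critic network
opts    = rlTrainingOptions("MaxEpisodes", 50, ...
                            "StopTrainingCriteria", "AverageReward", ...
                            "StopTrainingValue", 480);
trainingStats = train(agent, env, opts);
generatePolicyFunction(agent);   % emits evaluatePolicy.m + data for codegen
```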

D4 利用MATLAB APP自動產生報告
Automating Report Generation for MATLAB Applications

Claire Chuang
TeraSoft

15:10~15:50
摘要

MATLAB報告產生器非常適合需要定期報告的用例,無論是報告測試結果、認證、合規性、法規等。您也可以透過編寫MATLAB 程式碼來自訂報告並將其製作成範本使用。此外,MATLAB報告產生器支援產出格式豐富的 Microsoft Word、Microsoft PowerPoint®、HTML以及PDF檔案格式 。

MATLAB Report Generator is great for use cases that require periodic reports, whether it is reporting on test results, certification, compliance, regulations, and more. You can also customize reports and work with templates by writing the same MATLAB code you have always been familiar with. In addition, you can automate the creation of richly formatted Microsoft Word, Microsoft PowerPoint, HTML, or PDF reports using the MATLAB Report Generator.
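
A minimal MATLAB Report Generator sketch that assembles a PDF with a title page, a chapter of text, and a figure; file and title names are illustrative.

```matlab
% Assemble a PDF report with a title page, a chapter of text, and a figure.
% File and title names are illustrative.
import mlreportgen.report.*

rpt = Report("test_summary", "pdf");
append(rpt, TitlePage("Title", "Nightly Test Summary", "Author", "CI Bot"));
append(rpt, TableOfContents());

ch = Chapter("Title", "Results Overview");
append(ch, "All regression suites completed; details follow.");
fh = figure("Visible", "off");  plot(rand(1, 20));  title("Sample metric");
append(ch, Figure(fh));
append(rpt, ch);

close(rpt);      % writes test_summary.pdf
rptview(rpt);    % open the generated report
```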

D5 訊號完整性進階技巧
Advanced Signal Integrity Techniques

David Chang
TeraSoft

16:00~16:40
摘要

本演講專注於訊號完整性的進階技術,特別是利用 MATLAB 的 RF Toolbox 進行 S 參數的時域閘選及利用 Signal Integrity Toolbox 進行腳本控制的最佳化。

透過詳細的示範與解說,參與者將學習如何自動化與最佳化訊號完整性流程,以提升高速數位設計的性能與可靠性。

This session focuses on advanced signal integrity techniques, particularly the use of MATLAB RF Toolbox for time-gating S-parameters and leveraging script-based optimization with the Signal Integrity Toolbox.

Participants will learn advanced techniques for automating and optimizing signal integrity processes to enhance the performance and reliability of high-speed digital designs through detailed demonstrations and explanations.
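
A simplified sketch of time-gating S11 with RF Toolbox: read a Touchstone file, transform to the time domain, keep only the samples around the main reflection, and transform back. The gating here is hand-rolled for illustration (passive.s2p is an example file shipped with RF Toolbox); a production flow would use a calibrated gate width and window.

```matlab
% Simplified time-gating of S11: frequency -> time via zero-padded IFFT,
% keep only samples near the main reflection, then transform back.
% passive.s2p ships with RF Toolbox as an example Touchstone file.
S   = sparameters("passive.s2p");
f   = S.Frequencies;
s11 = rfparam(S, 1, 1);

N   = numel(f);
h   = ifft(s11, 4*N);                          % impulse-like response (zero-padded)
[~, pk] = max(abs(h));
gate = abs((1:4*N).' - pk) < round(0.05*4*N);  % keep a window around the peak
s11Gated = fft(h .* gate);
s11Gated = s11Gated(1:N);                      % back on the original grid

plot(f/1e9, 20*log10(abs(s11)), f/1e9, 20*log10(abs(s11Gated)), "--");
xlabel("Frequency (GHz)"); ylabel("|S_{11}| (dB)");
legend("Raw", "Time-gated (illustrative)");
```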
