SenseTime Upgrades the “SenseNova” Foundation Model Sets, Revolutionizing Industries with Rapid Iteration
July 7, 2023, Shanghai – SenseTime, a strategic partner of the World Artificial Intelligence Conference (WAIC) 2023, unveiled comprehensive upgrades to its “SenseNova” foundation model sets, which include an impressive array of new products and applications of large models, at the “AI+: Regeneration” forum during WAIC 2023. During the event, SenseTime and various industry partners introduced and showcased application practices for its large-model technology, including the latest smart cabin products and the V2X vehicle-road-cloud synergy transportation system developed by SenseAuto, as well as additional applications in finance, healthcare, e-commerce, smart terminals, industrial parks, and other industries.
At the product launch, Dr. Xu Li, Chairman and CEO of SenseTime, said, “The breakthrough in large models has ignited a new wave of technological revolution in AI, leading to explosive growth in industry demand and the emergence of new application scenarios and formats. SenseTime hopes to enhance AI infrastructure capabilities through its strategy centered on ‘Large model + AI Infrastructure’, not only to develop a more powerful foundational model with comprehensive capabilities, but also to efficiently integrate expertise from different vertical fields to build professional large models with a deeper understanding of specific industries. Ultimately, we are looking to reduce the cost and barriers of downstream applications of large models in various industries, enabling their value to be fully realized.”
Dr. Xu Li, Chairman and CEO of SenseTime
In accordance with its artificial general intelligence (AGI) strategy centered on “Large models + AI Infrastructure”, SenseTime's “SenseNova” foundation model sets are undergoing rapid iteration. SenseChat 2.0, a natural language processing (NLP) model with hundreds of billions of parameters, has overcome the input length restrictions of large language models. It is now offered in versions of different parameter scales that can seamlessly adapt to various application scenarios and endpoints, such as mobile devices and the cloud, thereby reducing deployment costs. SenseTime's proprietary generative large model, SenseMirage 3.0, has grown from 1 billion to 7 billion parameters since its initial release in April this year, enabling it to generate images with professional-level detail.
Furthermore, SenseAvatar 2.0 has improved speech and lip-syncing fluency by over 30% and supports 4K video effects, along with AIGC image generation and digital singing capabilities. SenseSpace 2.0 has increased reconstruction efficiency by 20% and rendering performance by 50%, enabling a 100-square-kilometer scene to be mapped in just 38 hours with the support of 1,200 TFLOPS of computing power. Additionally, SenseThings 2.0 reproduces the texture and material of small objects with millimeter-level precision, overcoming the challenge of capturing highly reflective and mirror-like objects.
SenseTime Upgrades the “SenseNova” Foundation Model Sets
“SenseNova’s” integrated multimodal capabilities empower all industries
SenseTime is leveraging the rapid iteration of “SenseNova’s” fundamental technologies to actively drive industry development. By combining the multimodal capabilities of large models, SenseTime is making significant breakthroughs across various industries.
In the financial sector, SenseTime is collaborating with banks, insurance companies, brokers, and other clients to utilize digital humans for intelligent customer service, smart marketing, and other tasks. With access to large language model (LLM) capabilities, SenseTime is offering new functions such as investment research analysis and research report writing, which can help reduce costs and increase efficiency. Furthermore, by integrating with a financial knowledge base, the system can provide question-and-answer outputs that are grounded entirely in the client's product descriptions, while also ensuring that information is updated in real time.
In the healthcare sector, SenseTime has developed a Chinese medical language large model that is built on a vast amount of medical knowledge and clinical data. This model provides multi-scenario and multi-round conversation capabilities for health consultation and outpatient procedures. It can continuously enhance its medical language understanding and reasoning abilities, thereby empowering hospitals to improve their diagnosis and treatment efficiency, along with their patient service standards. Additionally, it will support the comprehensive analysis of various types of medical data, including images, texts, and structured data, using a multimodal approach.
By combining SenseChat 2.0 and SenseMirage 3.0, SenseTime offers mobile users a variety of intelligent solutions, including question-and-answer interactions for information retrieval, knowledge exchange for daily scenarios, and content interactions using language and images. Thanks to the lightweight versions of SenseTime's large models, these solutions can easily be deployed and run on mobile devices. In addition, SenseTime launched “The Three-Body Problem: Beyond Gravity”, an immersive experience space based on the award-winning sci-fi novel by Liu Cixin. This futuristic sci-fi journey is a testament to SenseTime's ability to break through the boundaries of imagination using its large model capabilities.
In physical-space scenarios, SenseTime leverages its large model capabilities to offer intelligent solutions for power grid inspections, including the identification of obscure faults and the assessment of intricate defects. SenseSpace 2.0 creates digital twins of real-world settings, allowing for improved operational management in the Mashan town area in Jinan, the China Vision Park in Hefei, and the Ruijin Hospital in Shanghai. Furthermore, by using SenseThings 2.0, SenseTime is able to create 3D digital versions of jewelry and accentuate product craftsmanship, thereby enhancing customers' shopping experiences. SenseTime's digital humans generated with SenseAvatar 2.0 are widely used on short video and live streaming platforms. The Company has partnered with leading companies to establish an ecosystem for “Cloud + AIGC + Short video/Live streaming”, offering an efficient, cost-effective, and user-friendly AI video and marketing tool.
In the field of intelligent automobiles, SenseTime is pushing the boundaries of smart cabins, intelligent driving, and vehicle-road synergy with the support of large models. Through multimodal fusion including vision and audio, SenseTime’s smart cabin software can perceive user needs and provide personalized services based on labeled data that records user habits and preferences. SenseTime's large models make it possible to better understand users and interact with them through customizable digital humans. This integration provides a comprehensive experience that combines safety, entertainment, education, and efficiency.
Empowered by the powerful capabilities of “Large model + AI Infrastructure”, SenseAuto has deployed edge-cloud collaboration and unified traffic entry, and supported private deployment and application needs at the scale of tens of millions. At the Computer Vision and Pattern Recognition Conference (CVPR) 2023, SenseTime and its joint labs proposed the industry’s first end-to-end autonomous driving foundation model, UniAD, which received the Best Paper Award, representing a significant breakthrough in autonomous driving. Based on this research, SenseTime is building a vehicle-road-cloud collaborative traffic system, developing a large-scale road-side visual perception model with its multimodal and multi-task model. This system combines SenseSpace 2.0 and SenseThings 2.0 to build intelligent traffic twins and simulations, and leverages the perception reasoning and human-machine interaction capabilities of SenseChat 2.0 to advance large model conversational interactions between vehicles, roads, and the cloud.
These new intelligent technologies, combined with large-scale computing power and large models, strengthen SenseTime’s long-term competitiveness and cornerstones of innovation in the AGI era. In addition to introducing the multi-task foundation model sets, these advancements also lay a new path for long-term development in basic scientific research, innovation, and the large-scale application of generative AI. Looking ahead, the fundamental value of large models lies in reconstructing productivity models and driving innovation so that AI can be implemented on an industrial scale. With the Company’s ongoing breakthroughs in R&D and technology empowerment, SenseTime will continue its innovation efforts in this highly dynamic AGI era, embrace new changes, and take the initiative to innovate and facilitate intelligent development.