
"Chinese-Style" OpenAI's "Crazy" 200 Days: Building Models With PPT While Struggling to Find Use Cases|今日熱議

Source: TiPost (钛媒体) | Time: 2023-06-23 23:37:30

Image source: Visual China

BEIJING, June 23 (TiPost) -- On November 30, 2022, ChatGPT, an artificial intelligence (AI) chatbot product developed by OpenAI, was released.

OpenAI probably did not expect that ChatGPT, originally a product meant to showcase GPT's capabilities to consumers, would over the past 200 days attract widespread attention from investors, entrepreneurs, unicorns, big companies, academia, economists, and even China's Minister of Science and Technology. Meanwhile, more than 30 tech giants, startups, and institutions, including Google, Microsoft, and Alibaba, have entered the fray, sparking discussion and a global AI large-model "arms race".



The "China Large AI Model Map Research Report" shows that as of May 28 this year, 79 large models with more than one billion parameters have been released in China. The US and China account for over 80% of the total number of large models released worldwide.

A basic consensus in the industry is that the emergence of ChatGPT marks the beginning of general AI and a turning point toward strong AI. It is a major breakthrough in AI technology innovation and application, and a "power plant" for digitalization in the new era.

With the help of ChatGPT, every digital system and industry is worth rebuilding and delivering in a SaaS (Software as a Service) manner that any industry can access. Looking ahead, more people hope ChatGPT can make enterprises' digital business processes faster, more efficient, and smarter.

However, compared with the commercial cases of ChatGPT used by companies such as Morgan Stanley and Stripe, which were announced by OpenAI and Microsoft, a strange phenomenon has emerged in the domestic "large model war":

Although their technology and product capabilities look strong, all sorts of bugs appear once the products reach customers. Companies that publicize their large models all talk up their technical strength and scenario-based solutions, and some even disclose partnerships, yet they rarely discuss the actual process of commercial deployment in public.

An industry insider told TiPost App that, on an earnings call, a listed company complained about a large-model product with billions of parameters developed by a certain internet giant. Despite being billed as the first product of its kind among global giants and able to generate a PPT in three minutes, the model crashed as soon as the listed company connected it to its own system.

At a recent AI industry application forum in Shanghai, an AI large model entrepreneur even stated that the dozen or so large language models released in the past few months are all very similar. The current situation is that only OpenAI can achieve commercialization of general AI and has the majority of users in the market. Furthermore, domestic AI large language models have yet to reach a commercializable level.

PPT-style large models can serve a variety of industries, but have numerous vulnerabilities when applied commercially

"In my opinion, whether ChatGPT can truly develop in this AI 2.0 wave depends on whether it has a business model and whether customers are willing to pay. No matter how we train large models like GPT, if there are no applications, no scenarios, no paying customers, and no business models, they cannot succeed." On June 2, at a roundtable in Shanghai Lingang, Yao Wei, Executive President of Capitalonline, a cloud computing listed company, stated that commercialization of AI large models is extremely important for industry development.

From the customer"s perspective, enterprises also urgently need generative AI to bring business transformation.

According to a survey of 1,003 small businesses across the United States conducted by entrepreneur service platform GoDaddy, ChatGPT is the most widely used generative AI product in small businesses, with a usage rate of 70%; 38% of respondents have tried generative AI in the past few months; marketing, content creation, and business advice are the top three use cases for generative AI in enterprises; 75% of respondents are very satisfied with the performance of generative AI in their business.

As overseas large models evolve rapidly, OpenAI has launched a $20-per-month pilot subscription, ChatGPT Plus, and opened the ChatGPT/GPT-4 APIs to developers at falling prices, driving the continued landing of large models in application scenarios.
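For developers, that access runs through a plain HTTPS API. The sketch below is a minimal illustration of calling OpenAI's Chat Completions endpoint from Python; the model name, prompt, and environment-variable name are illustrative assumptions rather than details taken from this article.

```python
# Minimal sketch of a Chat Completions request over HTTPS.
# Endpoint and response schema follow OpenAI's public API; the prompt,
# model choice, and OPENAI_API_KEY variable are illustrative assumptions.
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]  # assumes the key is set in the environment

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "user",
             "content": "Summarize this quarter's sales report in three bullet points."}
        ],
    },
    timeout=60,
)
resp.raise_for_status()
# The generated reply sits in the first choice's message content.
print(resp.json()["choices"][0]["message"]["content"])
```

Each such call is billed by tokens consumed, which is why falling per-token prices matter directly to application developers.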

Meanwhile, many domestic large models have also been released. Given the prospect of a $200 billion generative AI market and a digital economy on the scale of 50 trillion yuan, China is expected to offer the broadest room for the commercial development of AI large models.

Kai-Fu Lee, Chairman and CEO of Innovation Works, once said that the AI 2.0 era has entered a period of explosive productivity-improvement applications, which presents a huge platform opportunity and is China's first opportunity to participate in platform competition in the AI field.

Specifically, combining information on the development of large-model related enterprises, stock research reports, and Microsoft's recent disclosure of application scenarios, TiPost App has summarized the main commercial applications of ChatGPT-type products in the following seven industries:

Enterprise Operations: Daily office document writing and organization; marketing conversation robots, market analysis, sales strategy consulting; drafting legal documents, case analysis, legal article sorting; human resources resume screening, pre-recruitment, employee training.

Education: Assisting in evaluating student learning situations and providing advice for career planning; customizing learning content based on student situations and interests; initial draft and review of papers; helping low-income countries/families obtain equal education resources through GPT.

Gaming/Media: Customized games, dynamically generated NPC interactions, customized plots, open-ended endings; content generation for overseas copywriting, language translation and assisted advertising delivery and operation; live streaming of digital virtual humans; game platform code restructuring; AI-generated replicas.

Retail/E-commerce: Monitoring and analyzing public opinion, complaints, and emergencies; writing and delivering brand marketing content; automated inventory management; automatic or completed SKU category selection, quantity, and price distribution; customer shopping trend analysis and insight.

Finance/Insurance: Personal financial advisor; summary and initial approval of loan information; identification and detection of fraud risk activities; customer service center analysis and content insight; insurance claims processing and analysis; investor reports/research report summaries.

Manufacturing/Automotive: Production plan and supply chain plan status queries; predictive maintenance assistance for production lines; product quality analysis and traceability; full-scene simulation training for autonomous driving and virtual car assistants; online car brand and configuration comparison analysis.

Life Sciences: Target discovery and drug efficacy in the research and development phase; medical literature content retrieval, key summary extraction, and relevant regulatory sorting; medical representative training and knowledge base establishment; triage guidance assistant, diagnosis and treatment assistant, postoperative care and rehabilitation assistant.

Furthermore, ChatGPT"s large model and generative AI technology will also be extended to various complex scenarios in the fields of images, videos, digital humans, etc., using massive data resources and algorithms to achieve commercial applications and iterative updates.

OpenAI once conducted a study estimating that at least 50% of job duties in 19% of American jobs would be affected by AI. At least 10% of job duties in 80% of jobs would also be affected, with positions such as mathematicians, accountants and auditors, news analysts, legal secretaries and administrative assistants, and tax preparers being the most susceptible to the impact of large-scale GPT models.

However, much of the above remains theoretical: PPT presentations imagining the intelligent upgrade that frontier AI will bring. Although vendors compete fiercely on parameter scale, large-scale industrial deployments are still rare, and subsequent model correction and iterative evolution are progressing slowly.

Whether it"s data presenting "nonsense" or inaccurate translations between Chinese and English, insufficient computing power, or high prices, the idea that AI will fully assist with shopping, finance, and manufacturing is one-sided. Submitting to customer applications is not an easy task and may result in conflicts and problems.

For example, at a press conference held by iFlytek, when asked the professional medical question "What drug should be used to treat closed-angle glaucoma?", ChatGPT answered atropine. TiPost App also tried this question, and the responses were mostly incorrect or named drugs that cannot be purchased under Chinese drug regulations. The correct answer is pilocarpine.

Additionally, because large models are often trained on English data rather than Chinese internet data, language barriers arise. For example, when "fish-flavored shredded pork" is entered, an awkward image of a live fish cut into shreds may be generated, causing problems during commercialization.

A senior executive at a financial company previously told TiPost App that, due to ChatGPT's poor mathematical ability and inability to update some information in real time, domestic large-model products are not very effective in the financial sector. Exchange-rate and loan information may contain errors, and information asymmetry may also occur.

At the April release conference held by AI company Fourth Paradigm, a representative from a bank mentioned that information asymmetry in the financial industry could lead to higher interest rates for credit products or deposit rates, but even if all information is provided, the choices made may not be optimal.

"We are in finance, serving the public, and any information I convey must be accurate." The aforementioned representative believes that the challenges facing large models in enterprise landing mainly include content credibility risk, data security risk, and high landing costs.

In manufacturing, content errors can have even more serious consequences. Tolerance for defects and mistakes is strictly limited, and some high-precision processes require extreme accuracy; once an AI system makes a mistake, it can cause an accident.

Zhou Bowen, Huiyan Chair Professor at Tsinghua University and founder of AI company XuanYuan Tech, told TiPost App that large models like ChatGPT are progressing rapidly, but the problem is that they may seriously "talk nonsense". Especially in professional fields, outsiders see the output as expert work while insiders see it as amateurish; meanwhile, the original author of the source content regards it as plagiarism, while ordinary users regard it as creativity. In fact, it does not yet have genuine originality of ideas.

In addition, language issues also need attention.

According to Wired, at least 15 arXiv papers this year have examined the multilingual abilities of large models. Researchers found that AI systems including ChatGPT are better at translating other languages into English than at generating other languages from English, especially Korean and languages written in non-Latin scripts. ChatGPT also performs poorly when answering factual questions or summarizing complex non-English texts, and is more likely to fabricate information.

At a US congressional hearing in May, OpenAI CEO Sam Altman said the ChatGPT development team is taking steps to narrow the language gap, and that he hopes to work with governments and other organizations to obtain datasets that strengthen ChatGPT's language skills and the correctness of its answers.

Little mention of business cases, and some large models encounter obstacles in application

Industry insiders told TiPost App that GPT has already achieved genuine intelligence, and that the next point of competition lies in the productization, commercialization, engineering, and application scenarios of large models.

According to a report by Zhuo Shi Consulting, the global AI market reached an estimated $199.7 billion in 2022, having grown at a compound annual rate of 29.4%, and is expected to reach $562.4 billion in 2027, a compound annual growth rate of 23.0% from 2022 to 2027.
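As a quick sanity check, the implied forward growth rate can be recomputed from the two figures cited above; the short Python sketch below uses no numbers beyond the report's.

```python
# Check the cited 2022-2027 CAGR: $199.7B growing to $562.4B over 5 years.
start, end, years = 199.7, 562.4, 5
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR 2022-2027 ≈ {cagr:.1%}")   # -> ≈ 23.0%, matching the report's figure
```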

"Today, the AI technology capability is vastly different from 5 months ago. We have put a stronger product on our system platform, and as for sales and service, from a business perspective, it has just begun. It takes time from exposure to new technology to final procurement," said Huang Wei, founder and CEO of iFlytek, to TiPost App. The large model has just been released, and there are no large-scale commercial cases yet.

The scarcity of commercial cases is a defining feature of the current domestic large-model wave. Even SenseTime, an AI industry giant, has recently disclosed only 10-plus large-model customers, most of which are not leading companies in their verticals, according to TiPost App.

On May 30, the generative AI (AIGC) company Mobvoi submitted its prospectus to the Hong Kong Stock Exchange.

The prospectus shows that Mobvoi's total revenue in 2022 was 500 million yuan, and that its top five customers are basically AIoT (AI + Internet of Things) companies, most of which purchase Mobvoi's smart IoT solutions such as smartwatches. These customer relationships go back more than three years and are not tied to its AIGC business services.

As early as late April, Mobvoi announced the launch of the AI large model "Sequence Monkey" and began internal testing and exploration. Li Zhifei, founder and CEO of Mobvoi, told TiPost App that the company does not need external financing to support its R&D investment, and that apart from large models, its other businesses do not burn much money. "(Large models) may be the last thing I go all in on," he said. However, TiPost App did not find specific commercial revenue from large models in the prospectus.

This means that the AIGC industry, including the ChatGPT large model, is currently unlikely to generate large-scale revenue.

Of course, over the past few years, when AI technology has been deployed in real scenarios, commercial customers have also been reluctant to disclose the cooperation. This is not because the customers are in highly confidential industries, but because AI has become so ubiquitous that everyone wants to ride the "digital economy" wave themselves.

For example, in October last year, TiPost App visited a leading sewing-machine equipment manufacturer in Taizhou, Zhejiang, the world's top company by sales in its sector. The AI company Megvii served as the manufacturer's supplier of logistics equipment and AI technology, and the two sides cooperated to build an "intelligent intensive warehouse."

During the visit to see the AI technology in actual commercial use, TiPost App was told that the manufacturing leader did not want to disclose its cooperation with Megvii Technology, not because of the disclosure requirements on listed companies, but because the company's chairman wanted to present the AI application as self-developed rather than Megvii's. According to its disclosure on the Shanghai Stock Exchange, the company raised over RMB 1 billion to invest in building an intelligent factory.

So when TiPost App raised this with Megvii Technology CEO Yin Qi in March this year, he admitted that the term AI has been overused, and that its popularization has made it harder for AI companies to promote themselves. Some of these things were already being talked about by others three years ago, so "we need to accept the current situation."

"I don"t mind if our customers don"t say it was done by Megvii. For example, Huawei has done so much for operators, but in the early days, everyone just knew that Huawei was a great company, not what they were doing. Later, Huawei also made To C products, so they began to do some brand promotion. If the customer is willing to say it, it means that they also think this is important from this perspective. China needs some topicality for the capital market, and even some O2O companies say AI, which shows that whether it is related or not, everyone is doing AI." Yin Qi told TiPost App.

Founders of several traditional manufacturing companies told TiPost App that they want to cooperate with AI companies that build large models mainly because, limited by high costs such as computing power, data, and electricity, they cannot refine large models on their own. By collaborating with AI company teams on data, they can learn the technical know-how and then build their own teams.

"We will never work with an AI company for a long time because the price is too high," a supplier told TiPost App.

According to the prospectus released by Fourth Paradigm in April, although the company's year-over-year growth in 2022 was 52.7%, its customer sources are quite diversified, with almost no overlap among the top five customers over the past three years; the customer base is not very fixed.

Fourth Paradigm founder and CEO Dai Wenyuan explained to TiPost App that changes in the top ten clients do not mean that clients are changing every year, and customer retention rates are high, sometimes even reaching 90% per year.

How can companies address the application of large models?

Solving the problem of large-model implementation mainly involves three aspects: improving content credibility; addressing high computing costs, repeated training, and limited resources; and continuously reducing the price of large models, or using vertical-domain models for implementation.

The first issue to address is improving content credibility.

Zhou Bowen told TiPost App that the industry should develop a general-purpose large model that solves practical problems for different users, continuously applying the feedback gained through commercial delivery, and even undergoing evaluation, to resolve issues of content credibility.

Zhang Bo, an academician of the Chinese Academy of Sciences and Honorary Dean of the Institute for Artificial Intelligence at Tsinghua University, believes that ChatGPT lacks the ability to self-learn, which is its most fatal flaw; more data and further optimization are therefore needed to address practical application issues.

"Do not assume that ChatGPT can solve all AI problems. Without the ability to re-learn, it cannot cope with changes. This is the same for both domestic and foreign ChatGPT. When I asked American ChatGPT, they gave the same answer. Some Chinese ChatGPT models perform well, while others are incorrect. This raises a significant question for us. We need to apply it to decision-making problems, which requires further resolution," said Zhang Bo.

Xu Qingcai, head of Megvii's logistics business unit, said in a recent exchange that large models currently need to move toward verticalization: combining scenarios with a unified model and framework can improve content accuracy.

"There is still a certain gap, which comes from the technological infeasibility and the lack of a good method to achieve this. This is what we need to look at now, whether new technologies can bridge this gap. We believe these issues will soon be resolved," said Xu Qingcai.

The second issue is addressing the high cost of computing power and the scarcity of training resources.

Zhang Xin, co-founder of an AI computing-power company, told TiPost Focus that for a GPT-3-scale model, a single training run on an existing thousand-card cluster takes about a month and costs over $12 million in total. In the first half of this year, prices for training cards across the industry have risen continuously by more than 25%. Even so, no one has yet managed to train a large model on commercially available domestic chips.
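As a rough back-of-envelope reading of that figure, the implied per-card hourly rate can be derived from the cited cluster size, duration, and total cost; the 30-day month in the sketch below is the only added assumption.

```python
# What per-card hourly rate does the cited training cost imply?
# Cluster size, one-month duration, and total cost come from the article;
# the 30-day month is an assumption for the calculation.
CARDS = 1_000
HOURS = 24 * 30                      # assumed 30-day month ≈ 720 hours
TOTAL_COST_USD = 12_000_000          # "over $12 million"

gpu_hours = CARDS * HOURS            # 720,000 GPU-hours per run
implied_rate = TOTAL_COST_USD / gpu_hours
print(f"{gpu_hours:,} GPU-hours -> ${implied_rate:.2f} per card-hour implied")
# -> 720,000 GPU-hours -> $16.67 per card-hour implied
```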

Among the three elements of data, computing power, and algorithms, computing power is the foundation and competitive edge of large models. However, domestic chips still lag Nvidia GPUs in software adaptation and stability, and Zhang Xin believes the industry's ability to decouple from Nvidia GPUs remains weak. In the coming months, domestic chips may gradually be used to train models with billions of parameters or even larger, but accumulating computing power is still a major challenge.

Kong Dehai, co-founder and co-CEO of Lisan Technology, believes that the problem of computing power can be solved from four aspects: first, collaboration, where many calculations can be run in the cloud and coordinated according to demand; second, miniaturization of models, where small models can run on a single machine with high-quality data; third, retraining, where repeated training can help improve the user experience under limited conditions; and fourth, integrated computation.

Currently, the main computing-power demand for AI large models lies in training and inference. The highest cost is early-stage model training, mostly carried out in intelligent computing centers or on self-funded servers with NVIDIA A800/H800 GPUs, or on more affordable cloud servers. Inference requires less computing power and is not expensive. Most model applications require a hybrid of public and private clouds, along with the purchase of certain cloud services to better support large-model applications.

Finally, there is the issue of pricing.

Pricing is the most important factor in the commercialization of large models. Due to high training costs and difficulty in data selection, the price for models with billions of parameters can be as high as tens of thousands of yuan, and the high price makes many customers hesitate to purchase.

Dai Wenyuan told TiPost App that not every scenario or customer can bear the cost of a model with billions of parameters; that is a choice customers have to make. Even if a model has billions or trillions of parameters, that only represents its peak capability, and not every scenario can actually deliver that technology to customers. Moreover, vertical large models require smaller data scales, fit their scenarios better, and can offer stronger chat-style reasoning.

For example, Bloomberg previously released BloombergGPT, a large financial model applied in its own vertical field, and Medlinker released MedGPT, China's first medical large language model, which aims to deliver practical clinical value in real medical scenarios. Vertical large models are needed in fields such as healthcare, finance, and e-commerce.

Several AI industry insiders told TiPost App that from an industry perspective, a general model is like an "encyclopedia," able to answer any question and adaptable to different industry environments, while a vertical model is like an expert in a single field, which is professional but has a limited audience. However, the development of large vertical models will continue to improve the performance of models in various fields.

On June 16 of this year, OpenAI announced an update cutting the price of its embedding model by 75% and the input-token price of gpt-3.5-turbo by 25%, with the embedding price now at $0.0001 per 1K tokens. Altman has also said that OpenAI is developing new techniques that will allow models to be trained with less data and at lower cost.
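At that price, per-call costs are tiny and only become visible at scale. A minimal sketch of the arithmetic, where the monthly token volume is an assumed workload rather than a figure from the article:

```python
# Cost projection at the $0.0001-per-1K-token price cited above.
# The monthly token volume is an assumed workload for illustration only.
PRICE_PER_1K_TOKENS = 0.0001          # USD, from the June 2023 price cut
monthly_tokens = 2_000_000_000        # assumed: 2 billion tokens per month

monthly_cost = monthly_tokens / 1_000 * PRICE_PER_1K_TOKENS
print(f"~${monthly_cost:,.0f} per month")   # -> ~$200 per month
```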

"When the model is large enough, it can generalize the problem into a common problem and output it naturally. Perhaps in the future, more than 99% of common objects or events can be handled by a single model. The benefits are that it is likely to accelerate commercialization and bring better technological capabilities. Compared with the original method, it may shorten the cycle of industrial application." Yang Fan, co-founder of SenseTime and president of the Large Device Business Group, told TiPost App.

Zhou Hongyi, founder and chairman of 360, recently stated that the emergence of ChatGPT represents the coming of the super AI era. Large models belong to general artificial intelligence and have surpassed humans in many dimensions. At the same time, large models are industrial revolution-level productivity tools that will bring about a new industrial revolution and empower various industries. They can play an important role in the process of transforming the real economy into digital and intelligent.

"I believe that there are no insurmountable technical barriers for China to develop large models. We should thank the success of OpenAI for indicating the technology direction and route for us. Chinese technology companies have great advantages in productization, scenarization, and commercialization. I firmly believe that we can build this large model." Zhou Hongyi stated that China will not have only one large model in the future.

However, from an investment perspective, Wei Zhe, chairman and founding partner of Vision Knight Capital, recently said that "we do not touch large models."

Wei Zhe believes that many years in the internet industry have made one thing clear: the top players always take 60-70% of the market, with no exception in areas such as search engines and e-commerce. The same applies to artificial intelligence; it will be hard for more than two large models to win out in China, and likewise in the world outside China, including the United States.

Large models are a typical winner-takes-all field. They require more money, more computing power, and more talent. A better model means more people use it; more users mean more data; and more data makes the model better still. Large models are bound to be a battleground for giants, and giants have money, technology, and, most importantly, data.

Regarding the current "battle of the models," as Zhou Hongyi said, the key to large models is to allow more people to use them, combine large model capabilities with more scenarios, and create more landing applications.

Therefore, to summarize, only a few companies can do large models, and there are few opportunities for start-ups. It can even be said that if a company cannot commercialize large models, it will definitely lose in this round of competition.
