by Charles Cheng, CFA
A.I. these days seems to be a catch-all term. In terms of enterprise adoption, most of the hype and activity centers on generative A.I., following the success and adoption of ChatGPT and the products of other foundation model providers. A recent survey run by Axios ("AI is 'tearing apart' companies, survey finds," March 18, 2025) showed a deep divide between executives and employees over AI adoption in the workplace. Of particular interest, while 75% of C-suite executives believed their AI adoption efforts had been successful, only 45% of employees agreed. Across companies, executives appeared considerably more enthusiastic about AI than employees, even though employees would use their own AI tools, with some fearing their roles would be replaced. Even so, 94% of C-suite respondents said they weren't satisfied with their current AI solution, and anecdotally, employees were tired of being handed yet another chatbot. While some of the concerns are well founded, these discrepancies stem in part from misunderstandings of the capabilities of generative A.I.
Generative A.I. can be thought of as next-token prediction. Tokens are words or parts of words that carry semantic meaning. Models like ChatGPT generate responses by predicting the next token in a sequence, which makes the output seem fluid and natural. The user starts by sending a prompt to the model, and the model generates the likely response one token at a time, each step conditioned on all the previous tokens. If you type "The cat sat on the…", the AI might predict "mat" (60% probability), "couch" (30%), and "roof" (10%), pick one (often the most likely), and then continue.
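The loop described above can be sketched in a few lines of code. This is a toy illustration only: the probabilities are the made-up numbers from the example, not the output of a real language model, and a real model computes a distribution over tens of thousands of tokens at every step.

```python
import random

# Toy "model": maps a context to a probability distribution over
# candidate next tokens (illustrative numbers from the example above).
TOY_MODEL = {
    "The cat sat on the": {"mat": 0.60, "couch": 0.30, "roof": 0.10},
}

def predict_next(context, greedy=True):
    """Pick the next token: greedy takes the most likely option,
    otherwise sample a token in proportion to its probability."""
    dist = TOY_MODEL[context]
    if greedy:
        return max(dist, key=dist.get)
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs, k=1)[0]

print(predict_next("The cat sat on the"))  # -> mat
```

In practice, models usually sample rather than always taking the top token, which is why the same prompt can produce different answers on different runs.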
Note that all of this is based on statistical probabilities, not actual understanding. There are a couple of A.I.-related concepts that people tend to conflate with generative A.I. applications. Traditional machine learning focuses on prediction tasks using statistical patterns in data, such as forecasting user engagement. Before the transformer revolution brought LLMs to the fore, this was mainly what people meant when they talked about A.I. tools in the workplace. Users shouldn't expect, however, that a large language model can conduct a data analysis by itself and give an accurate answer, unless the application was specifically built to pull together the right tools and inputs for a narrow, well-defined scope. Holding that expectation will most likely end in disappointment with the output.
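To make the contrast concrete, here is a minimal sketch of the kind of "traditional machine learning" prediction task mentioned above: fitting a least-squares trend line to past weekly engagement and extrapolating one week ahead. The numbers are invented for illustration; real forecasting systems use richer models and far more data.

```python
# Fit a least-squares line y = slope * x + intercept to (xs, ys).
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

weeks = [1, 2, 3, 4]
engagement = [100, 110, 121, 130]    # e.g. weekly active users (made up)
slope, intercept = fit_line(weeks, engagement)
forecast = slope * 5 + intercept     # extrapolate to week 5
print(round(forecast, 1))            # -> 140.5
```

This kind of model is narrow by design: it answers exactly one statistical question about one data series, which is precisely what a general-purpose chat interface is not built to do on its own.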
On the other end of the spectrum, the capabilities of A.I. are sometimes compared to human intelligence. But once we understand that it is a statistical process rather than consciousness, it is clear that generative A.I. lacks intent and true understanding. Therefore, unless your job can be completely decomposed into a series of probabilistic, rule-based tasks, you're unlikely to be replaced by an A.I. version of yourself anytime soon. Sure, there are companies like OpenAI whose mission is to achieve A.G.I., or artificial general intelligence, generally understood to mean human-level intelligence. But there is a wide range of definitions and categories for A.G.I., and it doesn't have to mean consciousness, which is very unlikely to be achieved with current A.I. architectures.
Even so, there are many tasks that A.I. models can already do much better than humans, and many of these are tasks that humans would rather not spend their time on anyway. Given the rapid improvement in the models' functionality and performance, and all the useful tools the tech ecosystem is building around them, it is currently easier to say what A.I. models are not than what they are. These tools can work behind the scenes to power functionality that is useful in a daily human workflow, beyond asking questions and expecting a solution. When professionals better understand the advantages and limitations of the current architectures, we can move past chatbot-style interfaces and the misplaced expectations that come with them.
This article reflects the personal views of the author and not any firm’s and should not be viewed as an investment recommendation.