Posts

Artificial General Problem Solver

Chinese version: 通用人工问题求解器:一种生成式框架 - 知乎. OpenAI ChatGPT 4o assisted in refining the text of this article. Why an Artificial General Problem Solver? Instead of aiming for the grandiose and often vague goal of Artificial General Intelligence, which aspires to replicate the full breadth of human-like cognition, it can be far more productive to focus on developing an Artificial General Problem Solver. This approach defines intelligence in actionable, testable terms: the capacity to define a problem, learn a world model of its objectives, inputs, and outputs, and find solutions systematically from that model. By concentrating on problem-solving as the core function, AI research can remain grounded in clear objectives, interpretable reasoning, and practical applications, while avoiding the pitfalls of anthropomorphism and overgeneralization. In essence, an Artificial General Problem Solver reframes the quest for “intelligence” into building systems that understand, mo...

Praising the Nobel Prize Committee for Breaking Down the Wall between Physics and Computer Science

The Nobel Prize committee was brave and progressive to break down the wall between physics and computer science. After all, knowledge is knowledge: once learnt, it all becomes a unified entity in one's mind. The separation of "disciplines" exists only because of the material limitation of the human brain, which makes it impossible for one person to learn every piece of knowledge in a single lifetime. It was not so when rational thinking first emerged: a person could learn every piece of knowledge there was, until knowledge exploded and humans had to invent "disciplines" to preserve knowledge and to teach it. Nor will it be so in the future, when AI becomes a universal tool for problem solving, rendering "scientific understanding" unnecessary and impossible. The Nobel Prize's separation of disciplines will be outdated as well when that happens.

Dynamic Generative Control

Chinese version: 动态生成式控制 - 知乎 (zhihu.com). In several previous blog posts, we formulated the control problem as a generative model in the following fashion: \[[v,w,x] = g(z), \quad z \sim \mathbb{D},\] in which \(v\) is the collection of control objectives to be minimized, \(w\) contains the output control signals, and \(x\) represents the input signals that are thought to offer information to the control problem. When the system is dynamic, this form lacks explicit modeling of the dynamic changes. The simplest form of a dynamic generative control model can be written as \[\begin{bmatrix} v_t & w_t & x_t \\ v_{t-1} & w_{t-1} & x_{t-1} \end{bmatrix} = g(z), \quad z \sim \mathbb{D}.\] Now \(g(z)\) generates the system states before and after a control step; therefore, after training, it must be able to model the dynamic changes in the controlled system. The effect is a simpler dynamic control algorithm than that of a static model: \[ \begin{align} ...
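To make the two-step formulation concrete, here is a minimal sketch of a generator that emits the system state at steps \(t-1\) and \(t\), plus one assumed way to take a control step by sampling candidates whose generated previous row matches the observed state. The layer sizes, dimensions, and the candidate-matching step are illustrative assumptions, not the (truncated) algorithm from the post.

```python
# Hedged sketch: a generator g(z) that outputs the 2-row matrix
# [[v_t, w_t, x_t], [v_{t-1}, w_{t-1}, x_{t-1}]].  All sizes below are
# illustrative assumptions, not values from the post.
import torch
import torch.nn as nn

class DynamicGenerator(nn.Module):
    def __init__(self, z_dim=8, v_dim=1, w_dim=2, x_dim=4):
        super().__init__()
        self.dims = (v_dim, w_dim, x_dim)
        self.net = nn.Sequential(
            nn.Linear(z_dim, 128), nn.ReLU(),
            nn.Linear(128, 2 * sum(self.dims)),
        )

    def forward(self, z):
        y = self.net(z).view(z.shape[0], 2, sum(self.dims))
        # Row 0 is step t, row 1 is step t-1.
        v, w, x = torch.split(y, self.dims, dim=-1)
        return v, w, x

def control_step(g, v_prev, w_prev, x_prev, n=512, match_weight=10.0, z_dim=8):
    """One assumed control step: sample candidate matrices, prefer those
    whose generated step-(t-1) row matches the observed previous state,
    and return the control signal w_t with the smallest generated v_t."""
    with torch.no_grad():
        z = torch.randn(n, z_dim)
        v, w, x = g(z)
        mismatch = ((v[:, 1] - v_prev) ** 2).sum(-1) \
                 + ((w[:, 1] - w_prev) ** 2).sum(-1) \
                 + ((x[:, 1] - x_prev) ** 2).sum(-1)
        score = v[:, 0].sum(-1) + match_weight * mismatch
        return w[torch.argmin(score), 0]
```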

Dynamic System Control using a Static Generative Model

Chinese version: 静态生成式模型的动态系统控制算法 - 知乎 (zhihu.com). To recap from the last blog post, we formulated the control problem as a generative model in the following fashion: \[[v,w,x] = g(z), \quad z \sim \mathbb{D},\] in which \(v\) is the collection of control objectives to be minimized, \(w\) contains the output control signals, and \(x\) represents the input signals that are thought to offer information to the control problem. This model is static because it lacks an explicit formulation of the continuous evolution of the system states \([v,w,x]\). It is possible to change the model to a dynamic one, but in this blog post we show an algorithm that can offer dynamic control using a static model. \[ \begin{align} & \text{Initialize } \epsilon_0 \text{ and } [v_0, x_0] = r(u_0, w_0) \\ & \text{for } t = 1 \text{ to } \infty \text{ do} \\ & \quad \text{Initialize } [v', w', x'] = [v_{t-1}, w_{t-1}, x_{t-1}] \\ & \quad \text{While } v_{t-1} - v' > \epsilon_{t...
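Because the algorithm's inner loop is cut off in the excerpt, the sketch below only mirrors its overall shape: a static generator \(g(z) \to [v, w, x]\) used inside a closed loop, with \(r\) standing for the real system's response as in the excerpt's \(r(u_0, w_0)\). The candidate-sampling step that replaces the truncated while-loop is an assumption of this sketch, not the post's method.

```python
# Hedged sketch of a closed control loop around a static generator
# g(z) -> (v, w, x).  The inner while-loop of the post is truncated,
# so it is replaced here by a fixed budget of sampled candidates.
import torch

def pick_control(g, x_now, z_dim=8, n_candidates=256, match_weight=10.0):
    """Choose a control signal w whose generated x matches the current
    state and whose generated objective v (assumed scalar) is small."""
    with torch.no_grad():
        z = torch.randn(n_candidates, z_dim)
        v, w, x = g(z)                       # candidate triplets from g
        score = v.squeeze(-1) + match_weight * ((x - x_now) ** 2).sum(dim=-1)
        return w[torch.argmin(score)]

def control_loop(g, r, u_seq, w0):
    """u_seq: exogenous inputs u_0, u_1, ...; r: real system response,
    assumed to return the achieved objective v and the new input x."""
    v, x = r(u_seq[0], w0)                   # [v_0, x_0] = r(u_0, w_0)
    w = w0
    for u in u_seq[1:]:
        w = pick_control(g, x)               # propose a new control signal
        v, x = r(u, w)                       # apply it and observe the system
    return v, w, x
```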

Generative Control Inference

Chinese version: 生成式控制论的推理算法 - 知乎 (zhihu.com). In the previous post, I gave a peek into generative control, a new idea that controls a system by learning its intrinsic characteristics as a generative world model. It avoids the over-shoot and over-exploration problems of PID control and reinforcement learning. Rewriting the formulation a bit, we have \[[v,w,x] = g(z), \quad z \sim \mathbb{D},\] in which \(v\) is the collection of control objectives to be minimized, \(w\) contains the output control signals, and \(x\) represents the input signals that are thought to offer information to the control problem. During training, we are presented with a dataset of \([v,w,x]\) triplets, each representing the control objective \(v\) achieved under the output and input condition \([w,x]\). We have an algorithm that can produce \(g\), a generative model that can disambiguate the uncertainty of multiple acceptable \(v\), \(w\) or \(x\)'s, each under the condition of the other ...
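The inference algorithm itself is cut off in the excerpt, so the snippet below shows only one plausible way to query such a trained model: optimize over the latent \(z\) so that the generated \(x\) matches the observed input while the generated objective \(v\) stays small, then read off \(w\). The optimizer, the weights, and the assumption that \(g\) returns the three parts separately are illustrative, not the post's actual procedure.

```python
# Hedged sketch: latent-space search against a trained generator
# g(z) -> (v, w, x).  This is one plausible inference scheme, not
# necessarily the algorithm described in the (truncated) post.
import torch

def infer_control(g, x_obs, z_dim=8, steps=200, lr=0.05, match_weight=10.0):
    """Find a control signal w for the observed input x_obs."""
    z = torch.randn(1, z_dim, requires_grad=True)     # start from z ~ D
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        v, w, x = g(z)
        # Match the observed input and keep the objective small.
        loss = match_weight * ((x - x_obs) ** 2).mean() + v.mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        _, w, _ = g(z)
    return w.squeeze(0)
```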

A Peek into Generative Control

Chinese version: 一窥生成式控制论 - 知乎 (zhihu.com). As a peek into some of my most recent work applying generative modeling to various industries, I'm proudly presenting an idea that illustrates how powerful generative models can be for industrial control. My team and I are working to rapidly expand this idea into many different areas, and we haven't seen its limit yet. A General Formulation of Generative Models: \[y = g(z), \quad z \sim \mathbb{D}\] where \(y\) is a sample from the data, \(g\) is a generator neural network written as a function, and \(z\) follows a pre-defined distribution \(\mathbb{D}\). It is easy to see that generative adversarial networks (GANs) naturally produce models of the above form. For autoregressive language models (transformers or otherwise), \(z\) is the concatenation of all the random sampling variables used during the decoding process. For Stable Diffusion, \(z\) is the concatenation of all the noise added in the diffusion process. A...
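As a concrete reading of the formulation \(y = g(z)\), here is a minimal PyTorch sketch of a generator network and the sampling step \(z \sim \mathbb{D}\). The layer sizes and the choice of a standard normal for \(\mathbb{D}\) are illustrative assumptions.

```python
# Minimal sketch of y = g(z), z ~ D: a small generator network sampled
# from a standard normal.  Sizes are illustrative assumptions only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=8, y_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, y_dim),
        )

    def forward(self, z):
        return self.net(z)

if __name__ == "__main__":
    g = Generator()
    z = torch.randn(16, 8)    # z ~ D, here a standard normal
    y = g(z)                  # y = g(z): a batch of generated samples
    print(y.shape)            # torch.Size([16, 3])
```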

Thoughts on AIGC for Non-AI Industries

1. AIGC is a paradigm shift from goal-oriented problem solving to free-form interactive engineering. It is time to expand our imagination to products that can talk and draw with customers, on top of being able to complete their own tasks through these interactions. Your fridge can help order groceries when asked, but it can also answer generic questions the way ChatGPT does. There is no reason to limit the AI to doing only what its shell product is designed to do. For manufacturers of these products, it means better customer stickiness. 2. The entire AIGC economy is in its infancy because right now the paying customers are the tech-savvy people who can afford a few tens of bucks in subscription fees every month. To make it truly ubiquitous in every product and every place, the AI model serving cost must be reduced by a factor of thousands. When that is achieved, products like GitHub Copilot might just be free, like Bing search. At that point, every product that is capable of accessing the Inter...