Academic Writing.2: Artificial Intelligence

Artificial Intelligence

Intelligence has created a great temptation for the researcher who wants to know the secrets of the universe. Researchers hoped to build artificial intelligence on the computer to simulate real intelligence. However, different researchers have different definitions of artificial intelligence (AI). From ‘Artificial intelligence is the study of how to make computers do things at which, at the moment, people are better’ by Rich [1] to ‘An intelligent system is one whose expected utility is the highest that can be achieved by any system with the same computational limitations’ by Russell [2], numerous researchers have contributed to the field of artificial intelligence. A brief history of this field will be reviewed in this essay.


With the idea that ‘the world can be represented as a physical symbol system’, the traditional AI researchers used symbol systems to represent relationships in the world. For instance, if A is B’s parent, then B is A’s child. This can be represented as (equal (parent A B) (child B A)) in Lisp.

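The parent/child rule can be sketched as a tiny symbol system in Python (an illustrative sketch; the tuple encoding of the Lisp expression is my own):

```python
# A minimal symbolic knowledge base: facts are tuples, a rule derives new facts.
facts = {("parent", "A", "B")}  # A is B's parent

def derive(facts):
    """Apply the rule: (parent X Y) implies (child Y X)."""
    derived = set(facts)
    for (pred, x, y) in facts:
        if pred == "parent":
            derived.add(("child", y, x))
    return derived

kb = derive(facts)
print(("child", "B", "A") in kb)  # the symbol system now 'knows' B is A's child
```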

The traditional AI researchers wanted to have ‘an explanation of can, cause, and knows in terms of a representation of the world by a system of interacting automata’ [6]. They hoped their programs would have common sense; in other words, the computer would become a ‘Logic Theory Machine’ [5]. To do this, AI researchers first needed to know the structure of human knowledge: what is knowledge? How can knowledge be represented? These problems need to be solved by philosophy. However, as Wittgenstein wrote, logic cannot deduce any new information, and knowledge is too complicated to represent. This approach lost favor over the three decades after it began.


Modern AI research is termed machine learning, which is based on probability and statistical learning. Machine learning is concerned with the design and development of algorithms that allow computers to improve their performance over time based on data, such as sensor data or databases. Machine learning is divided into supervised learning and unsupervised learning, depending on whether a measurement of the outcome is available. A machine learning method chooses features of the data, then classifies or regresses the data into a one- or two-dimensional space. Through the analysis of training data, a model can be found that predicts the behavior of new objects and describes group tendencies. The problem is first represented in a high-dimensional space, and the number of dimensions is then reduced by a mathematical transformation. However, these advantages are not without a drawback: it is difficult to find the proper mathematical transformation function.

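The supervised case described above can be sketched with a minimal nearest-centroid classifier in Python (illustrative only; the one-dimensional feature values and class labels are invented):

```python
# Supervised learning sketch: fit a per-class centroid on labeled training
# data, then classify a new point by its nearest centroid.
def fit(points, labels):
    centroids = {}
    for label in set(labels):
        members = [p for p, l in zip(points, labels) if l == label]
        centroids[label] = sum(members) / len(members)  # mean of the 1-D feature
    return centroids

def predict(centroids, x):
    return min(centroids, key=lambda label: abs(x - centroids[label]))

train_x = [1.0, 1.2, 0.8, 5.0, 5.5, 4.5]   # measured feature values
train_y = ["a", "a", "a", "b", "b", "b"]   # known outcomes (the supervision)
model = fit(train_x, train_y)
print(predict(model, 1.1))  # -> a
print(predict(model, 4.9))  # -> b
```

Unsupervised learning would receive only `train_x`, without the labels, and have to discover the two groups itself.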

Researchers also learn from the real world and society. This approach to AI research is termed computational intelligence. The neural network (NN) learns from the human brain, the genetic algorithm (GA) simulates evolution, and swarm intelligence (SI) learns from ants or birds; in short, computational intelligence learns from natural life. For example, Particle Swarm Optimization (PSO) is one type of swarm intelligence. It was first introduced by Eberhart and Kennedy in 1995 [3, 4]. PSO was originally designed to simulate birds seeking food, which is defined as a “cornfield vector”. A bird finds food through social cooperation with the other birds around it (within its neighborhood). In other words, when a flock of birds goes to seek food, a single bird does not know the entire group’s position; it knows only the positions of the few birds around it. Each bird’s position changes quickly and is determined by the positions in its neighborhood. Every bird seeks its own local best position, and from these the entire group finds the global best. However, this approach also has its drawback: insufficient mathematical analysis is its deficiency.

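The neighborhood mechanism can be sketched as a minimal global-best PSO in Python, minimizing f(x) = x² (an illustrative variant with commonly used coefficients, not the exact algorithm of [3, 4]):

```python
import random

def pso(f, n_particles=20, iters=100, lo=-10.0, hi=10.0):
    """Minimal global-best PSO on a 1-D function f."""
    random.seed(0)
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                  # each particle's own best position so far
    gbest = min(pos, key=f)         # best position seen by the whole flock
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # velocity is pulled toward the personal best and the global best
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
            if f(pos[i]) < f(gbest):
                gbest = pos[i]
    return gbest

best = pso(lambda x: x * x)
print(best)  # close to the minimum at 0
```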

Different problems need different methods, as the ‘no free lunch’ theorem suggests. Further research is needed in the AI field to determine good solutions for problems and to represent the physical world.


References

[1] Rich, E. (1983). Artificial Intelligence. New York: McGraw-Hill

[2] Fogel, D. B. Evolutionary Computation. IEEE Press.

[3] Eberhart, R. C., Kennedy, J. (1995). A New Optimizer Using Particle Swarm Theory. Proc. Sixth International Symposium on Micro Machine and Human Science (Nagoya, Japan), IEEE Service Center, Piscataway, NJ, 39-43

[4] Eberhart, R. C., Kennedy, J. (1995). Particle Swarm Optimization. Proc. IEEE International Conference on Neural Networks (Perth, Australia), IEEE Service Center, Piscataway, NJ, pp. IV: 1942 – 1948

[5] McCarthy, J. Programs with Common Sense.

[6] McCarthy, J., Hayes, P. J. Some Philosophical Problems from the Standpoint of Artificial Intelligence.

Technorati Tags:
Posted in Academic Writing | Leave a comment

Academic Writing.1: The Intelligence

Different researchers have different definitions of artificial intelligence (AI). From ‘Artificial intelligence is the study of how to make computers do things at which, at the moment, people are better’ by Rich (1983, p.1) to ‘An intelligent system is one whose expected utility is the highest that can be achieved by any system with the same computational limitations’ by Russell (quoted in Ubiquity, 2004), numerous researchers have contributed to the field of artificial intelligence. A brief history of this field will be reviewed in this essay.


With the idea ‘The world can be represented as a physical symbol system’, the AI researchers used the symbol system to simulate the relationships in the world. For instance, if A is B’s parent, then B is A’s child. This can be represented as (equal (parent A B) (child B A)) in Lisp. However, as Wittgenstein wrote, logic cannot deduce any new information, and knowledge is too complicated to represent. This approach has lost favour over the last three decades.


Modern AI research is termed machine learning, which is based on probability and statistical learning. Through the analysis of training data, a model can be found that predicts the behavior of new objects and describes group tendencies. The problem is first represented in a high-dimensional space, and the number of dimensions is then reduced by a mathematical transformation. However, these advantages are not without a drawback: it is difficult to find the proper mathematical transformation function.


Researchers also learn from the real world and society. This approach to AI research is termed computational intelligence. The neural network learns from the human brain, the genetic algorithm simulates evolution, and swarm intelligence learns from ants or birds; in short, computational intelligence learns from natural life. Insufficient mathematical analysis is the deficiency of this approach.


Different problems need different methods, as the ‘no free lunch’ theorem suggests. Further research is needed in the AI field to determine good solutions for problems and to represent the physical world.


References

Rich, E. (1983). Artificial Intelligence. New York: McGraw-Hill

Fogel, David B. Evolutionary Computation. IEEE Press

Posted in Academic Writing | Leave a comment

Bayesian Decision Theory 2

Suppose that we know both the prior probabilities P(ω_j) and the conditional densities p(x|ω_j) for j = 1, 2. Suppose further that we measure the feature X of an object and discover that its value is x. How does this measurement influence our attitude concerning the true state of nature, that is, the category of the object? We note first that the (joint) probability density of finding a pattern that is in category ω_j and has feature value x can be written in two ways:

p(ω_j, x) = P(ω_j|x) p(x) = p(x|ω_j) P(ω_j). Rearranging these leads us to the answer to our question, which is called Bayes formula:


P(ω_j | x) = p(x | ω_j) P(ω_j) / p(x)                 (1)

where in this case of two categories


p(x) = Σ_{j=1}^{2} p(x | ω_j) P(ω_j)               (2)

Bayes formula can be expressed informally in English by saying that


posterior = likelihood × prior / evidence       (3)

Bayes formula shows that by observing the value of x we can convert the prior probability P(ω_j) to the a posteriori probability (or posterior) P(ω_j|x): the probability of the state of nature being ω_j given that feature value x has been measured. We call p(x|ω_j) the likelihood of ω_j with respect to x (a term chosen to indicate that, other things being equal, the category ω_j for which p(x|ω_j) is large is more “likely” to be the true category). Notice that it is the product of the likelihood and the prior probability that is most important in determining the posterior probability; the evidence factor, p(x), can be viewed as merely a scale factor that guarantees that the posterior probabilities sum to one, as all good probabilities must.

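Formula (1) can be checked numerically; the sketch below uses the priors 2/3 and 1/3 from the text but invents two Gaussian class-conditional densities for illustration:

```python
import math

def gauss(x, mu, sigma):
    """A normal density, standing in for a class-conditional p(x|w)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

priors = [2.0 / 3.0, 1.0 / 3.0]        # P(w1), P(w2), as in the text

def likelihoods(x):
    return [gauss(x, 11.0, 1.0),       # p(x|w1), invented for illustration
            gauss(x, 14.0, 1.0)]       # p(x|w2), invented for illustration

x = 12.0
lik = likelihoods(x)
evidence = sum(l * p for l, p in zip(lik, priors))             # formula (2)
posteriors = [l * p / evidence for l, p in zip(lik, priors)]   # formula (1)

print(posteriors)        # the two posterior probabilities at x
print(sum(posteriors))   # they always sum to one
```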

The variation of P(ω_j|x) with x is illustrated in Figure 2 for the case P(ω_1) = 2/3 and P(ω_2) = 1/3.



Figure 2. Posterior probabilities for the particular priors P(ω_1) = 2/3 and P(ω_2) = 1/3 for the class-conditional probability densities shown in Figure 1. Thus in this case, given that a pattern is measured to have feature value x = 14, the probability it is in category ω_2 is roughly 0.08, and that it is in ω_1 is 0.92. At every x, the posteriors sum to 1.0.



Posted in Pattern recognition | 2 Comments

Bayesian Decision Theory 1

Bayesian decision theory is a fundamental statistical approach to the problem of pattern classification. This approach is based on quantifying the tradeoffs between various classification decisions using probability and the costs that accompany such decisions. It makes the assumption that the decision problem is posed in probabilistic terms, and that all of the relevant probability values are known.


Let ω denote the state of nature, with ω = ω_1 for A and ω = ω_2 for B. Because the state of nature is so unpredictable, we consider ω to be a variable that must be described probabilistically.


Generally, we assume that there is some a priori probability (or simply prior) P(ω_1) that the next object is A, and some prior probability P(ω_2) that it is B. If we assume there are no other types relevant here, then P(ω_1) and P(ω_2) sum to one. These prior probabilities reflect our prior knowledge of how likely we are to get an A or a B before the object actually appears.


Assume that feature X has measured value x. Different objects will yield different features, and we express this variability in probabilistic terms; we consider x to be a continuous random variable whose distribution depends on the state of nature, and is expressed as p(x|ω) [1]. This is the class-conditional probability density function: the probability density function for x given that the state of nature is ω. (It is also sometimes called the state-conditional probability density.) The difference between p(x|ω_1) and p(x|ω_2) then describes the difference in feature X between the populations of A and B. (Figure 1)


Figure 1. Hypothetical class-conditional probability density functions show the probability density of measuring a particular feature value x given that the pattern is in category ω_i. If x represents the feature X, the two curves might describe the difference in feature X between the two types. Density functions are normalized, and thus the area under each curve is 1.0.

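Two hypothetical class-conditional densities like those in Figure 1 can be sketched as follows (the Gaussian form and its parameters are invented for illustration; only the normalization property comes from the text):

```python
import math

def density(x, mu, sigma):
    """A hypothetical class-conditional density p(x|w): here, a normal curve."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def p1(x):  # p(x|w1): feature X concentrates around 10
    return density(x, 10.0, 2.0)

def p2(x):  # p(x|w2): feature X concentrates around 15
    return density(x, 15.0, 2.0)

# Density functions are normalized: numerically, the area under each curve
# over a wide interval should come out close to 1.0.
step = 0.01
areas = [sum(p(i * step) * step for i in range(int(30 / step))) for p in (p1, p2)]
print([round(a, 3) for a in areas])  # each close to 1.0
```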

 


[1] Generally we use an upper-case P(·) to denote a probability mass function and a lower-case p(·) to denote a probability density function.



Posted in Pattern recognition | Leave a comment

联句 (Linked Verses)

# Origin: I happened to see the news that a classmate had married and mentioned it to Xiangyue Meng; that produced the first two lines of the linked verse, and the rest piled up from there

Shi Cheng:
    Watching old friends, one after another, become newlyweds,
    In anger I seek a little poem among the thicket of blades

Xiangyue Meng:
    In sorrow I ask the spring rain for a little poem
    The gallants of Wuling brandish their swords in vain,
    The youths of Sanhe have talent with nowhere to spend it

Wendy Hu:
    In sorrow I turn to the spring rain for a little poem
    Only, facing the wind, lost in thought

Xiangyue Meng
    ‘In sorrow I ask the spring rain for a little poem’: never mind that one
    The remaining lines, of course, are best as mine
    Classical verse still needs some allusions and tonal meter

Wendy Hu:
    Thicket of blades?
Shi Cheng:
    When people marry you have to give them money, so of course it is a thicket of blades

Wendy Hu
    I bear to watch my girlfriend turn into Garfield,
    Who speaks only of eating well and keeping fed;
    Having accepted the pearl, how dare I return it,
    I only regret we did not meet when she was slim

Wendy Hu
    Luckily I was once thin

Shi Cheng
    A young maiden sits in her chamber,
    Brows knit tight: what is her sorrow?
    She misses her love, thousands of miles away,
    And wonders when this thesis will ever end

Wendy Hu
    What this life cannot finish, the next life will

Peng Wang:
    Wait until the old road winds back through a thousand peaks,
    The flowers in bloom will always smile at spring

Xin Li:
    Watching the girl become a “leftover woman”,
    Resigned to grow old alone her whole life through


Posted in Life | Leave a comment

Pattern recognition

The foundations of pattern recognition can be traced to Plato and were later extended by Aristotle, who distinguished an "essential property" (which would be shared by all members of a class, or "natural kind" as he put it) from an "accidental property" (which could differ among members of the class). Pattern recognition can be cast as the problem of finding the essential properties of a category. It has been a central theme in the discipline of philosophical epistemology, the study of the nature of knowledge.


maximum likelihood estimation and Bayesian estimation

Maximum likelihood and several other methods view the parameters as quantities whose values are fixed but unknown. The best estimate of their value is defined to be the one that maximizes the probability of obtaining the samples actually observed. In contrast, Bayesian methods view the parameters as random variables having some known a priori distribution. Observation of the samples converts this to a posterior density, thereby revising our opinion about the true values of the parameters. In the Bayesian case, we shall see that a typical effect of observing additional samples is to sharpen the a posteriori density function, causing it to peak near the true values of the parameters. This phenomenon is known as Bayesian learning.

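The contrast can be sketched for estimating a Gaussian mean with known variance (a standard conjugate-prior illustration, not from the original notes; the data, prior, and parameter values are invented):

```python
import random

random.seed(1)
true_mu, sigma = 5.0, 1.0
data = [random.gauss(true_mu, sigma) for _ in range(200)]

# Maximum likelihood: the parameter is fixed but unknown; the estimate that
# maximizes the probability of the observed samples is the sample mean.
mle = sum(data) / len(data)

# Bayesian: the parameter is a random variable with a known prior N(mu0, tau0^2).
# Each observed sample converts the current density into a sharper posterior.
mu0, tau0_sq = 0.0, 10.0
post_mu, post_var = mu0, tau0_sq
for x in data:
    new_var = 1.0 / (1.0 / post_var + 1.0 / sigma ** 2)
    post_mu = new_var * (post_mu / post_var + x / sigma ** 2)
    post_var = new_var

print(round(mle, 2), round(post_mu, 2))  # both estimates land near true_mu
print(post_var < tau0_sq)  # the posterior has sharpened around that value
```

The shrinking `post_var` is exactly the "Bayesian learning" effect described above: more samples make the posterior peak more sharply near the true value.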

supervised learning and unsupervised learning

In both cases, samples x are assumed to be obtained by selecting a state of nature ω_j with probability P(ω_j), and then independently selecting x according to the probability law p(x|ω_j). The distinction is that with supervised learning we know the state of nature (class label) for each sample, whereas with unsupervised learning we do not.

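The two-step sampling process described above can be sketched directly (the priors and densities below are invented for illustration):

```python
import random

random.seed(0)
priors = {"w1": 0.7, "w2": 0.3}                 # P(w_j)
params = {"w1": (0.0, 1.0), "w2": (4.0, 1.0)}   # mean, sd of each p(x|w_j)

def draw_sample():
    # 1) select a state of nature w_j with probability P(w_j)
    state = random.choices(list(priors), weights=list(priors.values()))[0]
    # 2) independently select x according to p(x|w_j)
    mu, sd = params[state]
    return state, random.gauss(mu, sd)

supervised = [draw_sample() for _ in range(5)]        # labels kept with x
unsupervised = [draw_sample()[1] for _ in range(5)]   # labels discarded
print(supervised)     # (state, x) pairs: the learner sees the class labels
print(unsupervised)   # x values only: the learner must infer the structure
```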


Posted in Pattern recognition | Leave a comment

emacs configuration file

System: Windows Vista Business
emacs: emacs-22.3-bin-i386
Installing emacs:
Download emacs-22.3-bin-i386.rar and extract it.
Run addpm.exe under ./emacs/bin to create a Start-menu shortcut (./ is the directory emacs was extracted to) (optional).
Location of the emacs configuration files:
.emacs : C:\Users\cheng\AppData\Roaming (where cheng is the Windows user name)
.emacs is the main configuration file.
.emacs.d folder: C:\Users\cheng\AppData\Roaming (where cheng is the Windows user name)
The .emacs.d folder holds configuration files you write yourself.
For example, for a new configuration file example.el, the load command in .emacs is: (load "~/.emacs.d/example.el")
If the configuration file .emacs does not exist, follow these steps:
1) Start emacs
2) Copy the code below to the Windows clipboard
3) Switch to the emacs window and paste the code with Ctrl+y
4) Press Ctrl+x, then Ctrl+e
———————-begin—————————
(let ((home (getenv "HOME")))
  (if (and (stringp home)
           (not (string= "" home))
           (file-exists-p home))
      ;; After all, we have a home
      (let ((file (concat home "/.emacs")))
        (if (file-exists-p file)
            ;; It's there!
            (find-file file)
          ;; create one
          (switch-to-buffer (find-file file))
          (insert ";;; This is your .emacs\n")
          (save-buffer)))
    ;; Poor w32 users are usually homeless
    (if (y-or-n-p "You don't have a valid %HOME%, do you want a C:\\.emacs?")
        (progn
          (switch-to-buffer (find-file "C:/.emacs"))
          (insert ";;; Poor w32 users are usually homeless\n")
          (save-buffer))
      (message "Nothing changed"))))
————————-end—————————–
Below are the configuration commands from my .emacs file (after downloading, rename the file from emacs to .emacs)

——————- begin ——————————-

;; Indent with spaces only, never TAB.
(setq-default indent-tabs-mode nil)
(setq default-tab-width 4)
(setq tab-stop-list nil)

;; No scroll bar
(set-scroll-bar-mode nil)

;; Skip the startup screen
(setq inhibit-startup-message t)

;; Hide the tool bar
(tool-bar-mode -1)

;; No bell, no screen flash
(setq ring-bell-function 'ignore)

;; Wrap long lines
(setq truncate-partial-width-windows nil)

;; Default major mode
(setq default-major-mode 'text-mode)

;; Highlight the matching parenthesis in place, instead of briefly jumping to it
(show-paren-mode t)
(setq show-paren-style 'parentheses)

;; Move the mouse pointer away when the cursor gets close
(mouse-avoidance-mode 'animate)

;; Display images
(auto-image-file-mode t)

;; Maximize and restore the frame
(defun w32-restore-frame ()
  "Restore a minimized frame"
  (interactive)
  (w32-send-sys-command 61728))

(defun w32-maximize-frame ()
  "Maximize the current frame"
  (interactive)
  (w32-send-sys-command 61488))

;; Ask y-or-n instead of yes-or-no
(defalias 'yes-or-no-p 'y-or-n-p)

;; Don't quit on a bare C-x C-c; ask for confirmation first.
(setq kill-emacs-query-functions
      (lambda ()
        (y-or-n-p "Do you really want to quit? ")))

;; Set the emacs frame title
(setq frame-title-format "cheng@%f")

;;;; Show line and column numbers
(setq column-number-mode t)
(setq line-number-mode t)

;;;; Show the time
(setq display-time-24hr-format t)
(setq display-time-day-and-date t)
(display-time)

; my C/C++ code style
(defun my-c++-mode-hook()
  (setq tab-width 4 indent-tabs-mode nil)
  (c-set-style "stroustrup")
  (c-set-offset 'inline-open 0)
  (c-set-offset 'innamespace 0)
  (c-set-offset 'friend '-))
(add-hook 'c++-mode-hook 'my-c++-mode-hook)
(add-hook 'c-mode-hook 'my-c++-mode-hook)

;; Set the font and font size
(setq default-frame-alist
      '((font . "-*-courier new-normal-r-*-*-18-*-*-*-*-*-*-gb2312-*")))

;; Jump straight to a given line
(global-set-key [(meta g)] 'goto-line)

————— end ——————-

Posted in Software | 4 Comments

Fixing garbled Chinese characters in pidgin

When garbled characters appear while chatting over MSN in pidgin, fix it as follows:

1) Tools => Preferences => Conversations:
Under Font, uncheck "Use font from theme".
For the conversation font, manually pick a font that can display Chinese (e.g. Simsun); a size of 10 works.

2)Tools=>Plug-ins=>Conversation Colors=>Ignore incoming formats

Posted in Software | Leave a comment

Copying from web pages: converting characters

After copying text from a web page, every line ends with a ↓. How do you turn these into carriage returns?

1. Paste the text into a plain-text file first, then copy it into Word.
2. In Word's Find and Replace, replace ^l (manual line break) with ^p (paragraph mark).

Posted in Software | Leave a comment

Euler Project.1

If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
answer: 233168
solution:
Haskell:

   sum [n | n <- [1..999], n `mod` 5 == 0 || n `mod` 3 == 0]

C#:

using System;

namespace multiples
{
    class Program
    {
        static void Main(string[] args)
        {
            int multiples3 = 0;   // sum of multiples of 3
            int multiples5 = 0;   // sum of multiples of 5 that are not multiples of 3

            for (int i = 0; i < 1000; i++)
            {
                if ((i % 3) == 0)
                {
                    multiples3 += i;
                }

                // else-if: multiples of both 3 and 5 (15, 30, ...) are added only once
                else if ((i % 5) == 0)
                {
                    multiples5 += i;
                }
            }
            int result = multiples3 + multiples5;

            Console.WriteLine("the result is " + result);
        }
    }
}

Posted in Programming | Leave a comment