"Embedded Real-Time Operating System uC/OS-II", Shao (2nd edition); the corresponding kernel version is v2.52.
2021-11-23 09:30:22 38.65MB ucosii
加莱 Power Panel product manual (PDF)
2021-11-23 00:59:52 5.86MB general resources
Use the arrow keys to move, F to change the texture, and B to add lighting. Above the cube is a Bézier surface: rotate it with A and D, and adjust how strongly it twists with W and S. Pressing Space hides the surface outline. Innermost is a morphing torus. The .exe file can be run directly.
2021-11-22 19:19:16 971KB OpenGL texture mapping
Contains 25 legitimate (ham) emails, 25 spam emails, and the classifier source code; suitable for machine-learning beginners.
2021-11-22 17:42:44 13KB spam ham naive Bayes email classifier
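Not the bundled source code, but a minimal sketch of the technique it names: a multinomial naive Bayes filter trained on tokenized messages, with Laplace smoothing. The toy messages below are invented stand-ins for the 25 + 25 emails in the archive.

```python
# Minimal multinomial naive Bayes spam filter (illustrative sketch).
import math
from collections import Counter

def train(ham_docs, spam_docs):
    """Count word frequencies per class; each doc is a list of tokens."""
    counts = {"ham": Counter(), "spam": Counter()}
    for doc in ham_docs:
        counts["ham"].update(doc)
    for doc in spam_docs:
        counts["spam"].update(doc)
    total = len(ham_docs) + len(spam_docs)
    priors = {"ham": len(ham_docs) / total, "spam": len(spam_docs) / total}
    vocab = set(counts["ham"]) | set(counts["spam"])
    return counts, priors, vocab

def classify(doc, counts, priors, vocab):
    """Pick the class maximizing log P(c) + sum_w log P(w|c), Laplace-smoothed."""
    best, best_score = None, -math.inf
    for c in ("ham", "spam"):
        n_c = sum(counts[c].values())
        score = math.log(priors[c])
        for w in doc:
            score += math.log((counts[c][w] + 1) / (n_c + len(vocab)))
        if score > best_score:
            best, best_score = c, score
    return best

# Toy usage with two messages per class (the real set has 25 each):
counts, priors, vocab = train(
    ham_docs=[["meeting", "at", "noon"], ["see", "you", "tomorrow"]],
    spam_docs=[["win", "free", "cash"], ["free", "offer", "now"]])
print(classify(["free", "cash", "now"], counts, priors, vocab))  # -> spam
```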
Includes a Python implementation of the naive Bayes algorithm applied to classifying the iris dataset; the iris data is included in .txt format.
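For reference, the same task can be reproduced with a library Gaussian naive Bayes. This scikit-learn sketch is not the from-scratch implementation the archive contains; it just shows the expected workflow and result.

```python
# Gaussian naive Bayes on iris via scikit-learn (reference point only).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GaussianNB().fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```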
C# / C++ Bézier curve: it takes only a few lines of code, is very simple, and can be used in games.
2021-11-21 20:56:50 5KB c# c++ Bézier curve
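The "few lines of code" claim holds up. Here is a sketch of a cubic Bézier curve via De Casteljau's algorithm, written in Python rather than the archive's C#/C++ so all sketches in this listing share one language; points are plain (x, y) tuples.

```python
# Cubic Bézier point via De Casteljau's repeated linear interpolation.
def lerp(p, q, t):
    return tuple(a + (b - a) * t for a, b in zip(p, q))

def bezier(p0, p1, p2, p3, t):
    """Interpolate between control points until one point remains."""
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    d, e = lerp(a, b, t), lerp(b, c, t)
    return lerp(d, e, t)

# Sample the curve, e.g. as a movement path in a game:
path = [bezier((0, 0), (1, 2), (3, 2), (4, 0), i / 20) for i in range(21)]
```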
faceRecgSys: a face recognition system in Matlab; algorithms: LBP, PCA, KNN, SVM, and naive Bayes.
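The repository is Matlab, but the PCA + KNN branch of the pipeline is easy to sketch. The NumPy version below is an eigenfaces-style illustration; the array names and shapes are assumptions, not the repo's API.

```python
# Eigenfaces-style PCA + 1-nearest-neighbour sketch in NumPy.
import numpy as np

def pca_fit(X, k):
    """X: (n_images, n_pixels). Return mean face and top-k principal axes."""
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]          # rows of Vt are principal directions

def project(X, mean, axes):
    return (X - mean) @ axes.T   # coordinates in "face space"

def predict_1nn(train_proj, train_labels, query_proj):
    d = np.linalg.norm(train_proj - query_proj, axis=1)
    return train_labels[np.argmin(d)]

# Usage (hypothetical arrays):
#   mean, axes = pca_fit(train_images, k=50)
#   label = predict_1nn(project(train_images, mean, axes), train_labels,
#                       project(query_image[None], mean, axes))
```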
Bayesian methods for machine learning have been widely investigated, yielding principled methods for incorporating prior information into inference algorithms. In this survey, we provide an in-depth review of the role of Bayesian methods for the reinforcement learning (RL) paradigm. The major incentives for incorporating Bayesian reasoning in RL are: 1) it provides an elegant approach to action-selection (exploration/exploitation) as a function of the uncertainty in learning; and 2) it provides machinery to incorporate prior knowledge into the algorithms. We first discuss models and methods for Bayesian inference in the simple single-step bandit model. We then review the extensive recent literature on Bayesian methods for model-based RL, where prior information can be expressed on the parameters of the Markov model. We also present Bayesian methods for model-free RL, where priors are expressed over the value function or policy class. The objective of the paper is to provide a comprehensive survey on Bayesian RL algorithms and their theoretical and empirical properties.
2021-11-21 19:28:33 1.81MB Bayesian reinforcement learning machine learning deep learning
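The survey's first incentive (uncertainty-driven action selection) is easiest to see in the single-step bandit setting it opens with. Below is a generic Thompson-sampling sketch for a Bernoulli bandit, not code from the paper: each arm keeps a Beta posterior over its success rate, and the sampled rates decide exploration versus exploitation.

```python
# Thompson sampling for a Bernoulli bandit (generic illustration).
import random

def thompson(true_rates, steps=1000):
    # Beta(1, 1) prior on every arm, i.e. alpha = beta = 1.
    alpha = [1] * len(true_rates)
    beta = [1] * len(true_rates)
    reward = 0
    for _ in range(steps):
        # Sample a plausible success rate per arm; play the best sample.
        samples = [random.betavariate(a, b) for a, b in zip(alpha, beta)]
        arm = samples.index(max(samples))
        r = 1 if random.random() < true_rates[arm] else 0
        alpha[arm] += r          # posterior update: successes...
        beta[arm] += 1 - r       # ...and failures
        reward += r
    return reward

print(thompson([0.2, 0.5, 0.7]))  # most pulls converge to the 0.7 arm
```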
Linear Gaussian distribution:
$P(c \mid h, \mathit{subsidy}) = N(a_t h + b_t, \sigma_t^2)(c) = \frac{1}{\sigma_t\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{c - (a_t h + b_t)}{\sigma_t}\right)^2}$
$P(c \mid h, \lnot\mathit{subsidy}) = N(a_f h + b_f, \sigma_f^2)(c) = \frac{1}{\sigma_f\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{c - (a_f h + b_f)}{\sigma_f}\right)^2}$
Sigmoid function:
$P(\mathit{buys} \mid \mathit{Cost} = c) = \frac{1}{1 + \exp\bigl(-2(-c + \mu)/\sigma\bigr)}$
2021-11-21 16:56:17 1.55MB Bayes
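A quick numeric check of the two conditionals above; the parameter values (a, b, sigma, mu) are invented for illustration.

```python
# Evaluate the linear Gaussian density and the sigmoid purchase model.
import math

def linear_gaussian(c, h, a, b, sigma):
    """P(c | h) = N(a*h + b, sigma^2)(c)."""
    z = (c - (a * h + b)) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def p_buys(c, mu, sigma):
    """P(buys | Cost = c) = 1 / (1 + exp(-2 * (-c + mu) / sigma))."""
    return 1.0 / (1.0 + math.exp(-2.0 * (-c + mu) / sigma))

print(linear_gaussian(c=7.0, h=5.0, a=1.0, b=2.0, sigma=0.5))  # peak at c = a*h + b
print(p_buys(c=9.0, mu=10.0, sigma=2.0))  # cost below mu -> likely to buy
```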