
Problem description:
Find the nearest value to the given one.
You are given a list of values as set form and a value for which you need to find the nearest one.
(We need to find the value nearest to the target one.)
For example, we have the following set of numbers: 4, 7, 10, 11, 12, 17, and we need to find the nearest value to the number 9. If we sort this set in ascending order, then to the left of number 9 will be 7 and to the right will be 10. But 10 is closer than 7, which means that the correct answer is 10.
A few clarifications:
If 2 numbers are at the same distance, you need to choose the smallest one;
The set of numbers is always non-empty, i.e. the size is >= 1;
The given value can be in this set, which means that it's the answer;
The set can contain both positive and negative numbers, but they are always integers;
The set isn't sorted and consists of unique numbers.
Input: Two arguments. A list of values in the set form. The sought value is an int.
Output: Int.
def nearest_value(values: set, one: int) -> int:
    return None

if __name__ == '__main__':
    print("Example:")
    print(nearest_value({4, 7, 10, 11, 12, 17}, 9))

    # These "asserts" are used for self-checking and not for an auto-testing
    assert nearest_value({4, 7, 10, 11, 12, 17}, 9) == 10
    assert nearest_value({4, 7, 10, 11, 12, 17}, 8) == 7
    assert nearest_value({4, 8, 10, 11, 12, 17}, 9) == 8
    assert nearest_value({4, 9, 10, 11, 12, 17}, 9) == 9
    assert nearest_value({4, 7, 10, 11, 12, 17}, 0) == 4
    assert nearest_value({4, 7, 10, 11, 12, 17}, 100) == 17
    assert nearest_value({5, 10, 8, 12, 89, 100}, 7) == 8
    assert nearest_value({-1, 2, 3}, 0) == -1
    print("Coding complete? Click 'Check' to earn cool rewards!")

Since a set is an unordered collection of unique elements, my idea is to convert the set to a list, append the target number to it, and re-sort. Then I find the index of the appended number, use that index to locate its neighbors, and compare their distances.
def nearest_value(values: set, one: int) -> int:
    new = list(values)
    new.append(one)
    new.sort()
    p = new.index(one)
    if new[p] == new[-1]:
        # the target sorted to the end, so the answer is the element just before it
        return new[-2]
    elif p == 0:
        # the target sorted to the front, so the answer is the element just after it
        return new[1]
    else:
        # on a tie, <= picks the smaller neighbor
        if abs(new[p - 1] - one) <= abs(new[p + 1] - one):
            return new[p - 1]
        else:
            return new[p + 1]
Alternative solution:
def nearest_value(values: set, one: int) -> int:
    return min(values, key=lambda n: (abs(one - n), n))
So it really can be done in one line. Time to study lambda and min.
min(iterable, *[, key, default])
min(arg1, arg2, *args[, key])
Return the smallest item in an iterable, or the smallest of two or more arguments.
If one positional argument is provided, it must be an iterable, and the smallest item in the iterable is returned; if two or more positional arguments are provided, the smallest positional argument is returned.
There are two optional keyword-only arguments. The key argument specifies an ordering function, like that passed to list.sort(). The default argument is the value returned when the iterable is empty. If the iterable is empty and default is not given, a ValueError is raised.
If there are multiple minimal elements, the function returns the first one found. This is consistent with other stable sorting tools such as sorted(iterable, key=keyfunc)[0] and heapq.nsmallest(1, iterable, key=keyfunc).
A lambda expression (sometimes called a lambda form) is used to create an anonymous function. The expression lambda parameters: expression yields a function object. The unnamed object behaves like a function defined as:
def <lambda>(parameters):
    return expression
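A quick check of how the one-liner's key combines min and lambda: the key maps each number to a tuple (distance, value), so on a distance tie min prefers the smaller number, exactly as the clarifications require.

```python
def nearest_value(values: set, one: int) -> int:
    # key sorts first by distance to the target, then by the value itself
    return min(values, key=lambda n: (abs(one - n), n))

# 7 and 9 are both at distance 1 from 8, so the smaller one wins
print(nearest_value({7, 9}, 8))  # 7
print(nearest_value({4, 7, 10, 11, 12, 17}, 9))  # 10
```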
This problem gave me a rough understanding of lambda and min (and max). It really pays to work through problems: think it through yourself first, then study other people's better solutions.
Source: https://github.com/aymericdamien/TensorFlow-Examples
'''
A nearest neighbor learning algorithm example using TensorFlow library.
This example is using the MNIST database of handwritten digits
(http://yann.lecun.com/exdb/mnist/)

Author: Aymeric Damien
Project: https://github.com/aymericdamien/TensorFlow-Examples/
'''

from __future__ import print_function

import numpy as np
import tensorflow as tf

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# In this example, we limit mnist data
Xtr, Ytr = mnist.train.next_batch(5000)  # 5000 for training (nn candidates)
Xte, Yte = mnist.test.next_batch(200)  # 200 for testing

# tf Graph Input
xtr = tf.placeholder("float", [None, 784])
xte = tf.placeholder("float", [784])

# Nearest Neighbor calculation using L1 Distance
# Calculate L1 Distance
distance = tf.reduce_sum(tf.abs(tf.add(xtr, tf.negative(xte))), reduction_indices=1)
# Prediction: Get min distance index (Nearest neighbor)
pred = tf.arg_min(distance, 0)

accuracy = 0.

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:

    # Run the initializer
    sess.run(init)

    # loop over test data
    for i in range(len(Xte)):
        # Get nearest neighbor
        nn_index = sess.run(pred, feed_dict={xtr: Xtr, xte: Xte[i, :]})
        # Get nearest neighbor class label and compare it to its true label
        print("Test", i, "Prediction:", np.argmax(Ytr[nn_index]),
              "True Class:", np.argmax(Yte[i]))
        # Calculate accuracy
        if np.argmax(Ytr[nn_index]) == np.argmax(Yte[i]):
            accuracy += 1. / len(Xte)
    print("Done!")
    print("Accuracy:", accuracy)


Nearest Neighbor Algorithm
The idea behind the nearest neighbor algorithm is actually very simple: compute the similarity between the test image and every stored training image, find the most similar one, and assign that image's label to the test image as its class.
So how do we measure the similarity between two pieces of data?
L1 distance (Manhattan distance)
import numpy as np
import _pickle as pickle

def load_CIFAR10(filename):
    training_file = filename + "/data_batch_1"
    testing_file = filename + "/test_batch"
    with open(training_file, 'rb') as f:
        # data is a dict whose keys include 'data' and 'labels'
        data = pickle.load(f, encoding='latin1')
        # Xtr holds the raw pixels: e.g. 10000 color 32*32 images give
        # a 10000*3072 array, where 3072 = 32*32*3
        Xtr = data['data']
        # Ytr holds the labels, one per image, so 10000 of them
        Ytr = data['labels']
    with open(testing_file, 'rb') as f:
        data = pickle.load(f, encoding='latin1')
        Xte = data['data']
        # Xte = Xte[:100]
        # for a faster run, use only the first 100 images
        Yte = data['labels']
        # Yte = Yte[:100]
    return Xtr, Ytr, Xte, Yte

class NearestNeighbor(object):
    def __init__(self):
        pass

    def train(self, X, Y):
        self.Xtr = X
        self.Ytr = Y

    def predict_L1(self, X):
        # num_test is the number of test images (10000); shape[0] is the row count
        num_test = X.shape[0]
        print("num_test:", num_test)
        # Ypred stores the predicted label for each of the num_test images
        Ypred = np.zeros(num_test)
        for i in range(num_test):
            # compute the distance: subtract row-wise, sum absolute differences
            distance = np.sum(np.abs(self.Xtr - X[i, :]), axis=1)
            print("distance:", distance)
            # index of the minimum distance, e.g. a minimum of 50 at the 5th image
            min_index = np.argmin(distance)
            print("min_index:", min_index)
            # look up that training image's label and assign it to this test image
            Ypred[i] = self.Ytr[min_index]
            print("Ypred:", Ypred[i])
        return Ypred

Xtr, Ytr, Xte, Yte = load_CIFAR10("./cifar-10-batches-py")
# Xtr/Ytr are the training set, Xte/Yte the test set
nn = NearestNeighbor()
nn.train(Xtr, Ytr)
Yte_predict = nn.predict_L1(Xte)
print("Xtr: ", Xtr.shape)
print(Yte_predict)
# mean of the elementwise comparison gives the fraction predicted correctly
print('accuracy: %f' % (np.mean(Yte_predict == Yte)))

L2 distance (Euclidean distance)
The principle of KNN is that, to predict a new point x, we look at the classes of its K nearest points and let them decide which class x belongs to.

d(I_1, I_2) = \sqrt{\sum_P (I_1^P - I_2^P)^2}
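A quick numeric check of the L2 distance with NumPy, on two tiny made-up pixel vectors:

```python
import numpy as np

# two tiny "images" as flat pixel vectors (values chosen for easy arithmetic)
I1 = np.array([1.0, 2.0, 2.0])
I2 = np.array([0.0, 0.0, 0.0])

# L2 (Euclidean) distance: square root of the sum of squared pixel differences
d = np.sqrt(np.sum((I1 - I2) ** 2))
print(d)  # 3.0, since sqrt(1 + 4 + 4) = 3
```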

So how do we decide on a good value of K? Through cross-validation: split the sample data into a training part and a validation part in some ratio (e.g. 6:4). Start with a fairly small K, keep increasing it, compute the error on the validation set each time, and finally settle on a K that works well.
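A minimal sketch of that K-selection procedure, assuming scikit-learn's KNeighborsClassifier and its bundled iris dataset (neither is part of the CIFAR example above; they are just convenient for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# 6:4 split into training data and validation data, as described above
X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.4, random_state=0)

# try increasing values of K and keep the one with the best validation accuracy
best_k, best_acc = 0, 0.0
for k in range(1, 16):
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    acc = clf.score(X_val, y_val)
    if acc > best_acc:
        best_k, best_acc = k, acc
print(best_k, best_acc)
```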
Example: take movie classification. Movies can be divided into genres such as romance and action; what features does a romance film have, and what features does an action film have? In other words, given a movie, how do we classify it? Suppose we classify movies into only romance and action: a movie with many kissing scenes and few fight scenes is clearly a romance, and the reverse is an action film. Someone once scored movies by their number of fight scenes and kissing scenes. Now, given a movie with data (18, 90), i.e. 18 fight scenes and 90 kissing scenes, how do we tell its genre? KNN does this: first compute the distance (here, Manhattan distance) between the unknown movie and every movie in the sample set. Then sort by increasing distance and take the k movies with the smallest distances, say k = 3. If the genres of the top 3 are romance, romance, action, then we vote: the unknown movie gets 2 romance votes and 1 action vote, so we conclude it is a romance film.
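The voting above can be sketched in a few lines; the (fight, kiss) counts below are made up for illustration, since the original sample table is not reproduced here:

```python
import numpy as np

# hypothetical sample movies as (fight scenes, kiss scenes) with known genres
train = np.array([[3, 104], [2, 100], [99, 5], [98, 2]])
labels = ["romance", "romance", "action", "action"]
query = np.array([18, 90])  # the unknown movie from the text

# Manhattan (L1) distance from the query to every sample
d = np.sum(np.abs(train - query), axis=1)

# take the k = 3 nearest and vote on the genre
k = 3
votes = [labels[i] for i in np.argsort(d)[:k]]
print(max(set(votes), key=votes.count))  # romance (2 votes to 1)
```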
Function input: X is the source points dataset, Y is the destination (query) points dataset. For each point y in Y, we want to find the k points in X nearest to it.
The number of points k is given by the n_neighbors parameter; "distance" here means Euclidean distance.
Function output:
indices: the indices (into X) of those k nearest points; distances: for each point of Y, the distances to its k nearest points among all the points of X.
Function call:
nbrs = NearestNeighbors(n_neighbors=2, algorithm='ball_tree').fit(X)
distances, indices = nbrs.kneighbors(Y)

Worked example (first compute with NearestNeighbors, then verify by hand):
from sklearn.neighbors import NearestNeighbors
import numpy as np

X = np.array([[-1, -1],
              [-2, -1],
              [-3, -2],
              [1, 1],
              [2, 1],
              [3, 2]])

Y = np.array([[1, 5],
              [3, 3]])

nbrs = NearestNeighbors(n_neighbors=1, algorithm='ball_tree').fit(X)
distances, indices = nbrs.kneighbors(Y)

print(indices)
print(distances)

# [[5]
#  [5]]
# [[3.60555128]
#  [1.        ]]

# ---------- check ----------

def distEuclid(x, y):
    distance = np.sqrt(np.sum(np.square(x - y)))
    return distance

d = np.zeros((2, 6), dtype=float)
for i in range(len(Y)):
    for j in range(len(X)):
        d[i, j] = distEuclid(X[j], Y[i])
print(d)

# [[6.32455532 6.70820393 8.06225775 4.         4.12310563 3.60555128]
#  [5.65685425 6.40312424 7.81024968 2.82842712 2.23606798 1.        ]]

