
Find closest value in array

2,199 views (last 30 days)
Chiranjibi on 25 Aug 2014
Edited: John D'Errico on 10 Jul 2024 at 18:40
I have two vectors (which are timestamps), like:
V N
1375471092848936 1375473384440853
1375473388165900 1375471277856598
1375471320476780 1375473388165900
1375473388947681 1375471322465961
1375473392527002 1375471335206288
.................. ..................
My goal is to find the closest time in N with respect to V (i.e., find the time in N that is nearly equal to the time in V). My window is W = 1e4, and V should lie between N-W and N+W. So how do I find the closest time in MATLAB? Any help would be appreciated.
Thanks

Accepted Answer

Joe S on 10 Sep 2018
Edited: MathWorks Support Team on 27 Nov 2018
To compute the closest value in a vector “N” for each element of “V”, try the following code with example vectors “N” and “V”:
V = randi(10,[5 1])                        % query values
N = randi(10,[5 1])                        % values to search
A = repmat(N,[1 length(V)])                % one copy of N per element of V
[minValue,closestIndex] = min(abs(A-V'))   % smallest |N - V(j)| down each column
closestValue = N(closestIndex)
Note that if there is a tie for the minimum value in each column, MATLAB chooses the first element in the column.
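On newer releases (R2016b and later), implicit expansion makes the repmat step unnecessary, and the questioner's +/- W window can be applied afterwards. A minimal sketch, assuming V and N are column vectors as in the question and that matches farther than W away should be discarded:
W = 1e4;                                      % window from the question
[minValue, closestIndex] = min(abs(N - V'));  % column j compares N against V(j)
closestValue = N(closestIndex);               % closest element of N for each V
closestValue(minValue(:) > W) = NaN;          % NaN where no N lies within V +/- W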
  6 Comments
David on 5 Jul 2023
This just saved my night!
For anybody just searching for the index, a faster variant:
[~,closestIndex] = min(abs(N-V'));
Daniel Ryan on 2 Jul 2024 at 19:07
Great variant!


More Answers (4)

Andrew Reibold on 25 Aug 2014
Edited: Andrew Reibold on 25 Aug 2014
This finds the value in N which is closest to the V value I am calling.
N = [1990 1998 2001 2004 2001]
V = [2000 2011 2010 2001 1998]
[c index] = min(abs(N-V(1)))
In this case I'm looking for the closest value to 'V(1)', which is 2000. It should return the 3rd or 5th value of N, which is 2001.
Note: 'index' is the index of the closest value. If two are the same, like in this example with two different '2001's, it will return the index of the first one.
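If you want this for every element of V rather than just V(1), one way is to loop over V; a minimal sketch under that assumption (the variable names here are just illustrative):
closestIdx = zeros(size(V));      % index into N for each element of V
for k = 1:numel(V)
    [~, closestIdx(k)] = min(abs(N - V(k)));
end
closestValues = N(closestIdx)     % e.g. V(1) = 2000 maps to N(3) = 2001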
  4 Comments
reetu hooda on 17 Feb 2018
If N is just a decimal number and it is to be searched in a matrix V (containing decimal numbers), how would the code change?
Image Analyst on 17 Feb 2018
reetu, if N is just a single number, then you can do this:
[minDistance, indexOfMin] = min(abs(V-N));
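And if V is a matrix rather than a vector, the linear index that min returns can be converted back to row/column subscripts; a small sketch along those lines, still assuming N is a scalar:
[minDistance, linearIdx] = min(abs(V(:) - N));   % search over all elements of V
[row, col] = ind2sub(size(V), linearIdx);        % recover the matrix position
closestValue = V(row, col)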



Image Analyst on 25 Aug 2014
How about this:
clc;
% Sample data
numberOfRows = 5;
V = rand(numberOfRows, 1)
N = rand(numberOfRows, 1)
% Find min distance
minDistance = inf;
for ni = 1 : numberOfRows
    for vi = 1 : numberOfRows
        distances(vi, ni) = abs(N(ni) - V(vi));
        if distances(vi, ni) < minDistance
            minNRow = ni;
            minVRow = vi;
            minDistance = distances(vi, ni);
        end
    end
end
% Report to command window:
distances
fprintf('Closest distance is %f which occurs between row %d of N and row %d of V\n',...
    minDistance, minNRow, minVRow);
In the command window:
V =
0.5309
0.6544
0.4076
0.8200
0.7184
N =
0.9686
0.5313
0.3251
0.1056
0.6110
distances =
0.4378 0.0005 0.2057 0.4252 0.0801
0.3142 0.1231 0.3293 0.5488 0.0435
0.5610 0.1237 0.0825 0.3020 0.2033
0.1487 0.2886 0.4948 0.7144 0.2090
0.2503 0.1870 0.3932 0.6127 0.1074
Closest distance is 0.000470 which occurs between row 2 of N and row 1 of V
  3 Comments
Image Analyst on 2 Nov 2017
You can try this:
% Sample data
numberOfRows = 5;
V = rand(numberOfRows, 1)
N = rand(numberOfRows, 1)
% Find min distance
distances = pdist2(V, N)
[minDistance, index] = min(distances(:))
[minVRow, minNRow] = ind2sub(size(distances), index)
fprintf('The closest distance is %f which occurs between\nrow %d of V (%f) and\nrow %d of N (%f)\n',...
minDistance, minVRow, V(minVRow), minNRow, N(minNRow));
% Double-check / Prove it
V(minVRow) - N(minNRow)
Image Analyst on 10 Nov 2017
What's wrong with a for loop? And what are ni and vi?



John D'Errico on 10 Jul 2024 at 18:36
Edited: John D'Errico on 10 Jul 2024 at 18:40
To be honest, the easiest way is to use knnsearch. It works well in one dimension, as you have here, and it should be quite efficient.
V = [1375471092848936; 1375473388165900; 1375471320476780; 1375473388947681; 1375473392527002];
N = [1375473384440853; 1375471277856598; 1375473388165900; 1375471322465961; 1375471335206288];
help knnsearch
 KNNSEARCH Find K nearest neighbors.
    IDX = KNNSEARCH(X,Y) finds the nearest neighbor in X for each point in Y.
    X is an MX-by-N matrix and Y is an MY-by-N matrix. Rows of X and Y
    correspond to observations and columns correspond to variables. IDX is a
    column vector with MY rows. Each row in IDX contains the index of the
    nearest neighbor in X for the corresponding row in Y.

    [IDX, D] = KNNSEARCH(X,Y) returns a MY-by-1 vector D containing the
    distances between each row of Y and its closest point in X.
    ...
ids = knnsearch(N,V)
ids = 5x1
     2
     3
     4
     3
     3
There is no need to look at differences, compute absolute values, etc. Just use the tool that is designed to solve your problem directly.
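Since knnsearch can also return the distances, the W = 1e4 window from the question can be applied in the same call; a short sketch, assuming any match farther than W away should be treated as missing:
W = 1e4;
[ids, d] = knnsearch(N, V);   % index into N and distance, for each element of V
closestTimes = N(ids);
closestTimes(d > W) = NaN;    % no element of N within V +/- W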

Korosh Agha Mohammad Ghasemi on 25 Jun 2024 at 16:39
Moved: Voss on 25 Jun 2024 at 17:02
% Example V and N vectors
V = [1375471092848936; 1375473388165900; 1375471320476780; 1375473388947681; 1375473392527002];
N = [1375473384440853; 1375471277856598; 1375473388165900; 1375471322465961; 1375471335206288];
W = 1e4; % Window size
% Initialize the closest times array
closest_times = zeros(size(V));
% Find the closest time in N for each time in V within the window
for i = 1:length(V)
    % Calculate the absolute differences
    diffs = abs(N - V(i));
    % Find the indices of N that lie within the window
    within_window = find(diffs <= W);
    if ~isempty(within_window)
        % Find the closest time among those within the window
        [~, closest_idx] = min(diffs(within_window));
        % Map back to the actual index in N
        closest_times(i) = N(within_window(closest_idx));
    else
        % No times within the window
        closest_times(i) = NaN;
    end
end
% Display the closest times
disp('Closest times:');
disp(closest_times);
