• A coffee break introduction to time complexity of algorithms


    Just like writing your very first for loop, understanding time complexity is an integral milestone to learning how to write efficient complex programs. Think of it as having a superpower that allows you to know exactly what type of program might be the most efficient in a particular situation — before even running a single line of code.


    The fundamental concepts of complexity analysis are well worth studying. You’ll be able to better understand how the code you’re writing will interact with the program’s input, and as a result, you’ll spend a lot less wasted time writing slow and problematic code.


    It won’t take long to go over all you need to know in order to start writing more efficient programs — in fact, we can do it in about fifteen minutes. You can go grab a coffee right now (or tea, if that’s your thing) and I’ll take you through it before your coffee break is over. Go ahead, I’ll wait.


    All set? Let’s do it!


    What is “time complexity” anyway?

    The time complexity of an algorithm is an approximation of how long that algorithm will take to process some input. It describes the efficiency of the algorithm by the magnitude of its operations. This is different than the number of times an operation repeats. I’ll expand on that later. Generally, the fewer operations the algorithm has, the faster it will be.


    We write about time complexity using Big O notation, which looks something like O(n). There’s rather a lot of math involved in its formal definition, but informally we can say that Big O notation gives us our algorithm’s approximate run time in the worst case, or in other words, its upper bound. It is inherently relative and comparative.


    We’re describing the algorithm’s efficiency relative to the increasing size of its input data, n. If the input is a string, then n is the length of the string. If it’s a list of integers, n is the length of the list.


    It’s easiest to picture what Big O notation represents with a graph:


    Here are the main points to remember as you read the rest of this article:


    • Time complexity is an approximation

    • An algorithm’s time complexity approximates its worst case run time


    Determining time complexity

    There are different classes of complexity that we can use to quickly understand an algorithm. I’ll illustrate some of these classes using nested loops and other examples.


    Polynomial time complexity

    A polynomial, from the Greek poly meaning “many,” and Latin nomen meaning “name,” describes an expression made up of constants and variables, combined using addition, multiplication, and exponentiation to a non-negative integer power. That’s a super math-y way to say that it contains variables usually denoted by letters, and symbols that look like these:

    The below classes describe polynomial algorithms. Some have food examples.


    Constant

    A constant time algorithm doesn’t change its running time in response to the input data. No matter the size of the data it receives, the algorithm takes the same amount of time to run. We denote this as a time complexity of O(1).


    Here’s one example of a constant algorithm that takes the first item in a slice.


    func takeCupcake(cupcakes []int) int {
    	return cupcakes[0]
    }

    With this constant-time algorithm, no matter how many cupcakes are on offer, you just get the first one. Oh well. Flavours are overrated anyway.

    Linear

    A linear algorithm’s running time grows in direct proportion to the size of its input: it processes the input in n operations. This is often the best possible (most efficient) case for time complexity where all the data must be examined.

    Here’s an example of code with time complexity of O(n):


    func eatChips(bowlOfChips int) {
    	for chip := 0; chip <= bowlOfChips; chip++ {
    		// dip chip
    	}
    }

    Here’s another example of code with time complexity of O(n):


    func eatChips(bowlOfChips int) {
    	for chip := 0; chip <= bowlOfChips; chip++ {
    		// double dip chip
    	}
    }

    It doesn’t matter whether the code inside the loop executes once, twice, or any number of times. Both these loops process the input in a number of operations proportional to n, and thus can be described as linear.

    Quadratic

    Now here’s an example of code with time complexity of O(n²):

    func pizzaDelivery(pizzas int) {
    	for pizza := 0; pizza <= pizzas; pizza++ {
    		// slice pizza
    		for slice := 0; slice <= pizza; slice++ {
    			// eat slice of pizza
    		}
    	}
    }

    Because there are two nested loops, or nested linear operations, the algorithm processes the input n² times.


    Cubic

    Extending on the previous example, this code with three nested loops has time complexity of O(n³):

    func pizzaDelivery(boxesDelivered int) {
    	for pizzaBox := 0; pizzaBox <= boxesDelivered; pizzaBox++ {
    		// open box
    		for pizza := 0; pizza <= pizzaBox; pizza++ {
    			// slice pizza
    			for slice := 0; slice <= pizza; slice++ {
    				// eat slice of pizza
    			}
    		}
    	}
    }

    Logarithmic

    A logarithmic algorithm is one that reduces the size of the input at every step. We denote this time complexity as O(log n), where log is the logarithm function.

    One example of this is a binary search algorithm that finds the position of an element within a sorted array. Here’s how it would work, assuming we’re trying to find the element x:


    1. If x matches the middle element m of the array, return the position of m.


    2. If x doesn’t match m, see if m is larger or smaller than x. If larger, discard all array items greater than m. If smaller, discard all array items smaller than m.


    3. Continue by repeating steps 1 and 2 on the remaining array until x is found.
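    The three steps above can be sketched as a short Go function. The name binarySearch and the sample data below are my own illustration, not code from the article:

```go
package main

import "fmt"

// binarySearch returns the index of x in the sorted slice a, or -1 if x
// is absent. Every iteration discards half of the remaining range, so the
// loop runs O(log n) times.
func binarySearch(a []int, x int) int {
	low, high := 0, len(a)-1
	for low <= high {
		mid := (low + high) / 2
		switch {
		case a[mid] == x:
			return mid // step 1: x matches the middle element m
		case a[mid] > x:
			high = mid - 1 // step 2: m is larger, discard m and everything above it
		default:
			low = mid + 1 // step 2: m is smaller, discard m and everything below it
		}
	}
	return -1 // x is not in the slice
}

func main() {
	sorted := []int{2, 7, 19, 25, 41, 66, 80}
	fmt.Println(binarySearch(sorted, 25)) // prints 3
	fmt.Println(binarySearch(sorted, 5))  // prints -1
}
```

    Step 3 is the loop itself: each pass repeats steps 1 and 2 on the remaining half.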


    I find the clearest analogy for understanding binary search is imagining the process of locating a book in a bookstore aisle. If the books are organized by author’s last name and you want to find “Terry Pratchett,” you know you need to look for the “P” section.


    You can approach the shelf at any point along the aisle and look at the author’s last name there. If you’re looking at a book by Neil Gaiman, you know you can ignore all the rest of the books to your left, since no letters that come before “G” in the alphabet happen to be “P.” You would then move down the aisle to the right any amount, and repeat this process until you’ve found the Terry Pratchett section, which should be rather sizable if you’re at any decent bookstore, because wow did he write a lot of books.


    Quasilinear

    Often seen with sorting algorithms, the time complexity O(n log n) describes an algorithm that performs an O(log n) operation for each of the n elements it processes. One example of this is quick sort, a divide-and-conquer algorithm.

    Quick sort works by dividing up an unsorted array into smaller chunks that are easier to process. It sorts the sub-arrays, and thus the whole array. Think about it like trying to put a deck of cards in order. It’s faster if you split up the cards and get five friends to help you.

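    As a sketch of the divide-and-conquer idea (my own minimal implementation, not the article’s code), here is a quicksort in Go using a simple last-element pivot:

```go
package main

import "fmt"

// quickSort sorts a in place by partitioning around a pivot and recursing
// on the two chunks. Each level of recursion does O(n) partitioning work,
// and a reasonably balanced pivot gives about O(log n) levels, which is
// where the O(n log n) expected time comes from.
func quickSort(a []int) {
	if len(a) < 2 {
		return
	}
	pivot := a[len(a)-1] // simple last-element pivot choice
	i := 0
	for j := 0; j < len(a)-1; j++ {
		if a[j] < pivot {
			a[i], a[j] = a[j], a[i]
			i++
		}
	}
	a[i], a[len(a)-1] = a[len(a)-1], a[i] // move pivot into its final place
	quickSort(a[:i])                      // sort the smaller chunk...
	quickSort(a[i+1:])                    // ...and the larger chunk
}

func main() {
	deck := []int{7, 2, 9, 4, 1, 8}
	quickSort(deck)
	fmt.Println(deck) // prints [1 2 4 7 8 9]
}
```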

    Non-polynomial time complexity

    The below classes of algorithms are non-polynomial.


    Factorial

    An algorithm with time complexity O(n!) often iterates through all permutations of the input elements. One common example is a brute-force search, seen in the traveling salesman problem. It tries to find the least costly path between a number of points by enumerating all possible permutations and finding the ones with the lowest cost.

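    To see where the factorial comes from, here is a minimal Go sketch (my own illustration) that enumerates every ordering of n items; a brute-force traveling salesman solver would then cost out each of these routes against a distance table:

```go
package main

import "fmt"

// permutations returns every ordering of the items, so both its output
// size and its running time grow as O(n!).
func permutations(items []int) [][]int {
	if len(items) <= 1 {
		return [][]int{append([]int{}, items...)}
	}
	var result [][]int
	for i := range items {
		// rest holds every item except items[i]
		rest := make([]int, 0, len(items)-1)
		rest = append(rest, items[:i]...)
		rest = append(rest, items[i+1:]...)
		for _, p := range permutations(rest) {
			result = append(result, append([]int{items[i]}, p...))
		}
	}
	return result
}

func main() {
	cities := []int{0, 1, 2, 3}
	routes := permutations(cities)
	fmt.Println(len(routes)) // prints 24, i.e. 4! candidate routes
}
```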

    Exponential

    An exponential algorithm often iterates through all subsets of the input elements. It is denoted O(2ⁿ) and is often seen in brute-force algorithms. It is similar to factorial time except in its rate of growth, which, as you may not be surprised to hear, is exponential. The larger the data set, the steeper the curve becomes.

    In cryptography, a brute-force attack may systematically check all possible elements of a password by iterating through subsets. Using an exponential algorithm to do this, it becomes incredibly resource-expensive to brute-force crack a long password versus a shorter one. This is one reason that a long password is considered more secure than a shorter one.

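    Here is a small Go sketch (my own illustration) of iterating through all subsets with a bitmask; note how adding a single element doubles the count:

```go
package main

import "fmt"

// subsets returns every subset of the items. An n-element set has 2^n
// subsets, so enumerating them all takes O(2^n) time.
func subsets(items []int) [][]int {
	n := len(items)
	var result [][]int
	// Each integer mask from 0 to 2^n - 1 encodes one subset:
	// bit i set means items[i] is included.
	for mask := 0; mask < 1<<n; mask++ {
		var subset []int
		for i := 0; i < n; i++ {
			if mask&(1<<i) != 0 {
				subset = append(subset, items[i])
			}
		}
		result = append(result, subset)
	}
	return result
}

func main() {
	fmt.Println(len(subsets([]int{1, 2, 3})))    // prints 8
	fmt.Println(len(subsets([]int{1, 2, 3, 4}))) // prints 16: one more element doubles the work
}
```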

    There are further time complexity classes less commonly seen that I won’t cover here, but you can read about these and find examples in this handy table.


    Recursion time complexity

    As I described in my article explaining recursion using apple pie, a recursive function calls itself under specified conditions. Its time complexity depends on how many times the function is called and the time complexity of a single function call. In other words, it’s the product of the number of times the function runs and a single execution’s time complexity.


    Here’s a recursive function that eats pies until no pies are left:


    func eatPies(pies int) int {
    	if pies == 0 {
    		return pies
    	}
    	return eatPies(pies - 1)
    }

    The time complexity of a single execution is constant. No matter how many pies are input, the program will do the same thing: check to see if the input is 0. If so, return, and if not, call itself with one fewer pie.


    The initial number of pies could be any number, and we need to process all of them, so we can describe the input as n. Thus, the time complexity of this recursive function is the product of n calls and O(1) work per call: O(n).

    Worst case time complexity

    So far, we’ve talked about the time complexity of a few nested loops and some code examples. Most algorithms, however, are built from many combinations of these. How do we determine the time complexity of an algorithm containing many of these elements strung together?


    Easy. We can describe the total time complexity of the algorithm by finding the largest complexity among all of its parts. This is because the slowest part of the code is the bottleneck, and time complexity is concerned with describing the worst case for the algorithm’s run time.


    Say we have a program for an office party. If our program looks like this:


    package main

    import "fmt"

    func takeCupcake(cupcakes []int) int {
    	fmt.Println("Have cupcake number", cupcakes[0])
    	return cupcakes[0]
    }

    func eatChips(bowlOfChips int) {
    	fmt.Println("Have some chips!")
    	for chip := 0; chip <= bowlOfChips; chip++ {
    		// dip chip
    	}
    	fmt.Println("No more chips.")
    }

    func pizzaDelivery(boxesDelivered int) {
    	fmt.Println("Pizza is here!")
    	for pizzaBox := 0; pizzaBox <= boxesDelivered; pizzaBox++ {
    		// open box
    		for pizza := 0; pizza <= pizzaBox; pizza++ {
    			// slice pizza
    			for slice := 0; slice <= pizza; slice++ {
    				// eat slice of pizza
    			}
    		}
    	}
    	fmt.Println("Pizza is gone.")
    }

    func eatPies(pies int) int {
    	if pies == 0 {
    		fmt.Println("Someone ate all the pies!")
    		return pies
    	}
    	fmt.Println("Eating pie...")
    	return eatPies(pies - 1)
    }

    func main() {
    	takeCupcake([]int{1, 2, 3})
    	eatChips(23)
    	pizzaDelivery(3)
    	eatPies(3)
    	fmt.Println("Food gone. Back to work!")
    }

    We can describe the time complexity of all the code by the complexity of its most complex part. This program is made up of functions we’ve already seen, with the following time complexity classes:


    To describe the time complexity of the entire office party program, we choose the worst case. This program would have the time complexity O(n³).

    Here’s the office party soundtrack, just for fun.


    Have cupcake number 1
    Have some chips!
    No more chips.
    Pizza is here!
    Pizza is gone.
    Eating pie...
    Eating pie...
    Eating pie...
    Someone ate all the pies!
    Food gone. Back to work!

    P vs NP, NP-complete, and NP-hard

    You may come across these terms in your explorations of time complexity. Informally, P (for Polynomial time), is a class of problems that is quick to solve. NP, for Nondeterministic Polynomial time, is a class of problems where the answer can be quickly verified in polynomial time. NP encompasses P, but also another class of problems called NP-complete, for which no fast solution is known. Outside of NP, but still including NP-complete, is yet another class called NP-hard, which includes problems that no one has been able to verifiably solve with polynomial algorithms.


    P versus NP is an unsolved, open question in computer science.


    Anyway, you don’t generally need to know about NP and NP-hard problems to begin taking advantage of understanding time complexity. They’re a whole other Pandora’s box.


    Approximate the efficiency of an algorithm before you write the code

    So far, we’ve identified some different time complexity classes and how we might determine which one an algorithm falls into. So how does this help us before we’ve written any code to evaluate?


    By combining a little knowledge of time complexity with an awareness of the size of our input data, we can take a guess at an efficient algorithm for processing our data within a given time constraint. We can base our estimation on the fact that a modern computer can perform some hundreds of millions of operations in a second. The following table from the Competitive Programmer’s Handbook offers some estimates on required time complexity to process the respective input size in a time limit of one second.


    Keep in mind that time complexity is an approximation, and not a guarantee. We can save a lot of time and effort by immediately ruling out algorithm designs that are unlikely to suit our constraints, but we must also consider that Big O notation doesn’t account for constant factors. Here’s some code to illustrate.


    The following two algorithms both have O(n) time complexity.


    func makeCoffee(scoops int) {
    	for scoop := 0; scoop <= scoops; scoop++ {
    		// add instant coffee
    	}
    }

    func makeStrongCoffee(scoops int) {
    	for scoop := 0; scoop <= 3*scoops; scoop++ {
    		// add instant coffee
    	}
    }

    The first function makes a cup of coffee with the number of scoops we ask for. The second function also makes a cup of coffee, but it triples the number of scoops we ask for. To see an illustrative example, let’s ask both these functions for a cup of coffee with a million scoops.


    Here’s the output of the Go test:


    Benchmark_makeCoffee-4          1000000000             0.29 ns/op
    Benchmark_makeStrongCoffee-4    1000000000             0.86 ns/op

    Our first function, makeCoffee, completed in an average of 0.29 nanoseconds. Our second function, makeStrongCoffee, completed in an average of 0.86 nanoseconds. While those may both seem like pretty small numbers, consider that the stronger coffee took nearly three times longer to make. This should make sense intuitively, since we asked it to triple the scoops. Big O notation alone wouldn’t tell you this, since the constant factor of the tripled scoops isn’t accounted for.

    Improve time complexity of existing code

    Becoming familiar with time complexity gives us the opportunity to write code, or refactor code, to be more efficient. To illustrate, I’ll give a concrete example of one way we can refactor a bit of code to improve its time complexity.


    Let’s say a bunch of people at the office want some pie. Some people want pie more than others. The amount that everyone wants some pie is represented by an int > 0:


    diners := []int{2, 88, 87, 16, 42, 10, 34, 1, 43, 56}

    Unfortunately, we’re bootstrapped and there are only three forks to go around. Since we’re a cooperative bunch, the three people who want pie the most will receive the forks to eat it with. Even though they’ve all agreed on this, no one seems to want to sort themselves out and line up in an orderly fashion, so we’ll have to make do with everybody jumbled about.


    Without sorting the list of diners, return the three largest integers in the slice.


    Here’s a function that solves this problem and has O(n²) time complexity:

    func giveForks(diners []int) []int {
    	// make a slice to store diners who will receive forks
    	var withForks []int
    	// loop over three forks
    	for i := 1; i <= 3; i++ {
    		// variables to keep track of the highest integer and where it is
    		var max, maxIndex int
    		// loop over the diners slice
    		for n := range diners {
    			// if this integer is higher than max, update max and maxIndex
    			if diners[n] > max {
    				max = diners[n]
    				maxIndex = n
    			}
    		}
    		// remove the highest integer from the diners slice for the next loop
    		diners = append(diners[:maxIndex], diners[maxIndex+1:]...)
    		// keep track of who gets a fork
    		withForks = append(withForks, max)
    	}
    	return withForks
    }

    This program works, and eventually returns diners [88 87 56]. Everyone gets a little impatient while it’s running though, since it takes rather a long time (about 120 nanoseconds) just to hand out three forks, and the pie’s getting cold. How could we improve it?


    By thinking about our approach in a slightly different way, we can refactor this program to have O(n) time complexity:


    func giveForks(diners []int) []int {
    	// make a slice to store diners who will receive forks
    	var withForks []int
    	// create variables for each fork
    	var first, second, third int
    	// loop over the diners
    	for i := range diners {
    		// assign the forks
    		if diners[i] > first {
    			third = second
    			second = first
    			first = diners[i]
    		} else if diners[i] > second {
    			third = second
    			second = diners[i]
    		} else if diners[i] > third {
    			third = diners[i]
    		}
    	}
    	// list the final result of who gets a fork
    	withForks = append(withForks, first, second, third)
    	return withForks
    }

    Here’s how the new program works:


    Initially, diner 2 (the first in the list) is assigned the first fork. The other forks remain unassigned.


    Then, diner 88 is assigned the first fork instead. Diner 2 gets the second one.


    Diner 87 isn’t greater than first which is currently 88, but it is greater than 2 who has the second fork. So, the second fork goes to 87. Diner 2 gets the third fork.


    Continuing in this violent and rapid fork exchange, diner 16 is then assigned the third fork instead of 2, and so on.


    We can add a print statement in the loop to see how the fork assignments play out:


    0 0 0
    2 0 0
    88 2 0
    88 87 2
    88 87 16
    88 87 42
    88 87 42
    88 87 42
    88 87 42
    88 87 43
    [88 87 56]

    This program is much faster, and the whole epic struggle for fork domination is over in 47 nanoseconds.


    As you can see, with a little change in perspective and some refactoring, we’ve made this simple bit of code faster and more efficient.


    Well, it looks like our fifteen minute coffee break is up! I hope I’ve given you a comprehensive introduction to calculating time complexity. Time to get back to work, hopefully applying your new knowledge to write more effective code! Or maybe just sound smart at your next office party. :)


    Sources

    “If I have seen further it is by standing on the shoulders of Giants.” –Isaac Newton, 1675
    1. Antti Laaksonen. Competitive Programmer’s Handbook (pdf), 2017


    2. Wikipedia: Big O notation


    3. StackOverflow: What is a plain English explanation of “Big O” notation?


    4. Wikipedia: Polynomial


    5. Wikipedia: NP-completeness


    6. Wikipedia: NP-hardness


    7. Desmos graph calculator


    Thanks for reading! If you found this post useful, please share it with someone else who might benefit from it too!


    Translated from: https://www.freecodecamp.org/news/a-coffee-break-introduction-to-time-complexity-of-algorithms-64df7dd8338e/


  • Algorithmic complexity divides into time complexity and space complexity: time complexity measures how long an algorithm takes to execute, while space complexity measures how much storage the algorithm requires.











    1. Big O notation

    2. Ω notation

    3. Θ notation

    4. Little o notation

    5. Examples






      1. In general, the number of times an algorithm's basic operation repeats is some function f(n) of the problem size n, so the algorithm's time complexity is written T(n) = O(f(n)).

      2. To compute the time complexity, first identify the algorithm's basic operation, then determine from the statements how many times it executes. Next find the order of magnitude of T(n) (one of: 1, log₂n, n, n·log₂n, n², n³, 2ⁿ, n!), and set f(n) to that magnitude. If T(n)/f(n) tends to a constant c in the limit, then the time complexity is T(n) = O(f(n)).


      c[ i ][ j ] = 0;                      // basic operation, executed n² times
      c[ i ][ j ] += a[ i ][ k ] * b[ k ][ j ];  // basic operation, executed n³ times

      So T(n) = n² + n³. Among the orders of magnitude listed above, n³ matches T(n)'s order,
      so f(n) = n³, and the limit of T(n)/f(n) is a constant c.
      The algorithm's time complexity is therefore T(n) = O(n³).
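      The two basic operations above can be dropped into a complete Go program (the function name matMul and the 2×2 demo matrices are my own illustration):

```go
package main

import "fmt"

// matMul multiplies two n×n matrices. The initialization line runs n*n
// times and the innermost multiply-add runs n*n*n times, so
// T(n) = n² + n³, which is O(n³).
func matMul(a, b [][]int) [][]int {
	n := len(a)
	c := make([][]int, n)
	for i := range c {
		c[i] = make([]int, n)
	}
	for i := 0; i < n; i++ {
		for j := 0; j < n; j++ {
			c[i][j] = 0 // basic operation: n² executions
			for k := 0; k < n; k++ {
				c[i][j] += a[i][k] * b[k][j] // basic operation: n³ executions
			}
		}
	}
	return c
}

func main() {
	a := [][]int{{1, 2}, {3, 4}}
	fmt.Println(matMul(a, a)) // prints [[7 10] [15 22]]
}
```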


      …the k-th power order O(nᵏ), and the exponential order O(2ⁿ). As the problem size n grows, these time complexities keep growing and the algorithm's execution efficiency keeps falling.




    Definition: if the size of a problem is n, and the time a given algorithm needs to solve it is T(n), some function of n, then T(n) is called that algorithm's "time complexity".




    "Big O notation": the basic parameter used in this description is n, the size of the problem instance, and complexity or running time is expressed as a function of n. Here "O" means order of magnitude; saying "binary search is O(log n)" means it "needs on the order of log n steps to search an array of size n". The notation O(f(n)) means that as n grows, the running time grows at most at a rate proportional to f(n).

    This kind of asymptotic estimate is very valuable for the theoretical analysis and rough comparison of algorithms, but in practice the details can make a difference. For example, an O(n²) algorithm with low overhead may run faster than a high-overhead O(n log n) algorithm when n is small. Of course, once n is large enough, the algorithm with the slower-growing function will necessarily work faster.
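    To make this concrete, here is a Go sketch comparing two invented cost models; the constant factors 2 and 100 are assumptions chosen purely for illustration, not measurements:

```go
package main

import (
	"fmt"
	"math"
)

// Toy cost models: a cheap quadratic algorithm versus an O(n log n)
// algorithm with a large constant factor. The constants (2 and 100)
// are invented purely for illustration.
func quadraticCost(n float64) float64 { return 2 * n * n }
func nLogNCost(n float64) float64     { return 100 * n * math.Log2(n) }

func main() {
	for _, n := range []float64{8, 64, 1024, 65536} {
		faster := "n log n"
		if quadraticCost(n) < nLogNCost(n) {
			faster = "quadratic"
		}
		fmt.Printf("n=%6.0f  2n^2=%12.0f  100n*log2(n)=%12.0f  cheaper: %s\n",
			n, quadraticCost(n), nLogNCost(n), faster)
	}
}
```

    With these constants the quadratic model is cheaper for small n (up to a few hundred), after which the n log n model wins and keeps winning.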





    2.1 Swapping the contents of i and j

    temp = i;
    i = j;
    j = temp;

    Solution: each statement executes once, and the running time does not depend on the problem size n, so the time complexity is constant: T(n) = O(1).

    sum = 0;                  (once)
    for(i=1; i<=n; i++)       (n times)
        for(j=1; j<=n; j++)   (n² times)
            sum++;            (n² times)

    Solution: T(n) = 2n² + n + 1 = O(n²)


    for (i=1; i<n; i++)
        y = y + 1;            ①
    for (j=0; j<=(2*n); j++)
        x++;                  ②

    Solution: statement ① executes n-1 times and statement ② executes 2n+1 times, so T(n) = (n-1) + (2n+1) = 3n = O(n).



    b = 1;                    ①
    for (i=1; i<=n; i++)      ②
    {
        s = a + b;            ③
        b = a;                ④
        a = s;                ⑤
    }

    The frequency of statement ② is n,
    the frequency of statement ③ is n-1, and statements ④ and ⑤ run n-1 times as well,
    so T(n) = 1 + n + 3(n-1) = 4n - 2 = O(n).

    The next example has time complexity O(log₂n):


    i = 1;                    ①
    while (i <= n)
        i = i * 2;            ②

    Solution: the frequency of statement ① is 1.
    Let the frequency of statement ② be f(n); then 2^f(n) <= n, so f(n) <= log₂n.
    Taking the maximum, f(n) = log₂n, and
    T(n) = O(log₂n)
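    Mirroring the while loop above in Go (the function name doublingSteps is mine), we can count exactly how many times statement ② runs:

```go
package main

import "fmt"

// doublingSteps counts how many times the loop body runs before i exceeds n.
// For n >= 1 this is floor(log2(n)) + 1: doubling i each time means only
// about log2(n) steps are needed to pass n, so the loop is O(log₂n).
func doublingSteps(n int) int {
	steps := 0
	for i := 1; i <= n; i *= 2 {
		steps++
	}
	return steps
}

func main() {
	fmt.Println(doublingSteps(8))    // prints 4 (i takes the values 1, 2, 4, 8)
	fmt.Println(doublingSteps(1000)) // prints 10
}
```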





    Solution: when i = m and j = k, the inner loop runs k times. For i = m, j takes the values 0, 1, …, m-1, so the innermost loop runs 0 + 1 + … + (m-1) = m(m-1)/2 times in total. Letting i run from 0 to n, the loop runs 0 + (1-1)·1/2 + … + (n-1)·n/2 = n(n+1)(n-1)/6 times in all, so the time complexity is O(n³).

    We should also distinguish between an algorithm's worst-case behavior and its expected behavior. For example, quicksort's worst-case running time is O(n²), but its expected time is O(n log n). By carefully choosing the pivot value each time, we can reduce the probability of the quadratic (O(n²)) case to nearly zero. In practice, a well-implemented quicksort generally runs in O(n log n) time.


    Accessing an element of an array is a constant-time, or O(1), operation. An algorithm that can discard half of the data elements at each step, such as binary search, usually takes O(log n) time. Comparing two strings of n characters with strcmp takes O(n) time. The conventional matrix multiplication algorithm is O(n³), because computing each element requires multiplying n pairs of elements and adding them together, and there are n² elements in total.

    Exponential-time algorithms usually arise from the need to enumerate all possible results. For example, a set of n elements has 2ⁿ subsets, so an algorithm that must produce every subset will be O(2ⁿ). Exponential algorithms are generally too expensive unless n is very small, since in such a problem adding a single element doubles the running time. Unfortunately, there really are many problems (such as the famous "traveling salesman problem") for which every algorithm found so far is exponential. When we genuinely face such a situation, we should usually substitute an algorithm that finds an approximately optimal result.




    1. Big O notation

    Big O gives an upper bound: f(n) = O(g(n)) if there exist constants c > 0 and n₀ such that f(n) ≤ c·g(n) for all n > n₀. For example, take f(n) = 2n + 2.

    Proof: when n > 3 we have 2n + 2 < 3n, so we can choose n₀ = 3 and c = 3; then for n > n₀, f(n) < c·n, and therefore f(n) = O(n).




    2. Ω notation

    Ω gives a lower bound: f(n) = Ω(g(n)) if there exist constants c > 0 and n₀ such that f(n) ≥ c·g(n) for all n > n₀.





    3. Θ notation

    Θ gives a tight bound: f(n) = Θ(g(n)) when f(n) is both O(g(n)) and Ω(g(n)), i.e. f and g grow at the same rate.


    4. Little o notation

    Little o denotes a strict upper bound: f(n) = o(g(n)) means f(n)/g(n) → 0 as n → ∞, so f grows strictly more slowly than g.


    5. Examples


    Then f(n) = O(n^2) or f(n) = O(n^3) or f(n) = O(n^4) or …



    f(n) = o(n^3) or f(n) = o(n^4) or f(n) = o(n^5) or …








  • Time Complexity of Algorithms


    For any defined problem, there can be N number of solutions. This is true in general. If I have a problem and I discuss the problem with all of my friends, they will all suggest different solutions, and I am the one who has to decide which solution is the best based on the circumstances.

    Similarly, for any problem which must be solved using a program, there can be an infinite number of solutions. Let's take a simple example to understand this. Below we have two different algorithms to find the square of a number (for a moment, forget that the square of any number n is n*n):

    One solution to this problem can be running a loop n times, adding n to an accumulator on every pass.


    // we have to calculate the square of n
    square = 0
    for i = 1 to n
        square = square + n
    // when the loop ends, square will hold n*n
    return square

    Or, we can simply use a mathematical operator * to find the square.


    // we have to calculate the square of n
    return n*n

    In the above two simple algorithms, you saw how a single problem can have many solutions. While the first solution required a loop which will execute n times, the second solution used the mathematical operator * to return the result in one line. So which one is the better approach? Of course, the second one.
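    Both algorithms can be written as runnable Go (the function names here are my own):

```go
package main

import "fmt"

// squareBySumming adds n to an accumulator n times: n elementary steps,
// so its time complexity is O(n).
func squareBySumming(n int) int {
	square := 0
	for i := 1; i <= n; i++ {
		square += n
	}
	return square
}

// squareByMultiplying uses the * operator once: a single step, O(1).
func squareByMultiplying(n int) int {
	return n * n
}

func main() {
	fmt.Println(squareBySumming(12))     // prints 144, after 12 additions
	fmt.Println(squareByMultiplying(12)) // prints 144, in a single step
}
```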

    What is Time Complexity?

    Time complexity of an algorithm signifies the total time required by the program to run till its completion.


    The time complexity of algorithms is most commonly expressed using the big O notation. It's an asymptotic notation to represent the time complexity. We will study about it in detail in the next tutorial.


    Time complexity is most commonly estimated by counting the number of elementary steps performed by an algorithm to finish execution. Like in the example above, for the first code the loop will run n times, so the time complexity will be at least n, and as the value of n increases, the time taken will also increase. For the second code, the time complexity is constant, because it will never depend on the value of n; it will always give the result in one step.

    And since the algorithm's performance may vary with different types of input data, hence for an algorithm we usually use the worst-case Time complexity of an algorithm because that is the maximum time taken for any input size.


    Calculating Time Complexity

    Now let's tap onto the next big topic related to time complexity, which is how to calculate time complexity. It becomes very confusing sometimes, but we will try to explain it in the simplest way.

    Now the most common metric for calculating time complexity is Big O notation. This removes all constant factors so that the running time can be estimated in relation to N, as N approaches infinity. In general you can think of it like this:


    statement;

    Above we have a single statement. Its time complexity will be constant: the running time of the statement will not change in relation to N.

    for(i=0; i < N; i++)

    The time complexity for the above algorithm will be Linear. The running time of the loop is directly proportional to N. When N doubles, so does the running time.


    for(i=0; i < N; i++) 
        for(j=0; j < N;j++)

    This time, the time complexity for the above code will be Quadratic. The running time of the two loops is proportional to the square of N. When N doubles, the running time increases by N * N.


    while (low <= high)
    {
        mid = (low + high) / 2;
        if (target < list[mid])
            high = mid - 1;
        else if (target > list[mid])
            low = mid + 1;
        else
            break;
    }

    This is an algorithm to break a set of numbers into halves to search for a particular value (we will study this in detail later). Now, this algorithm will have a logarithmic time complexity. The running time of the algorithm is proportional to the number of times N can be divided by 2 (N is high - low here). This is because the algorithm divides the working area in half with each iteration.

    void quicksort(int list[], int left, int right) {
        if (left >= right) return;   // base case: sublists of size 0 or 1
        int pivot = partition(list, left, right);
        quicksort(list, left, pivot - 1);
        quicksort(list, pivot + 1, right);
    }

    Taking the previous algorithm forward, above we have the core logic of Quick Sort (we will study this in detail later). In Quick Sort, we divide the list into halves every time, and each level of division does work proportional to N (where N is the size of the list). Hence the average time complexity will be N * log(N): there are about log(N) levels of halving, each costing linear time, so the algorithm is a combination of linear and logarithmic.

    继续前面的算法,上面我们有快速排序的核心逻辑(我们将在后面详细研究)。 在“快速排序”中,我们每次都将列表分成两半,而每一层划分所做的工作都与N成正比(其中N是列表的大小)。 因此,平均时间复杂度将为N * log(N) :大约log(N)层的划分,每层花费线性时间,因此该算法是线性和对数的组合。

    NOTE: In general, doing something with every item in one dimension is linear, doing something with every item in two dimensions is quadratic, and dividing the working area in half is logarithmic.


    时间复杂度的表示法类型 (Types of Notations for Time Complexity)

    1. Big Oh denotes "fewer than or the same as" <expression> iterations.

      Big Oh表示“ 小于或等于 ” <expression>迭代。

    2. Big Omega denotes "more than or the same as" <expression> iterations.

      Big Omega表示“ 大于或等于 ” <expression>迭代。

    3. Big Theta denotes "the same as" <expression> iterations.

      Big Theta表示“ 与<expression>迭代相同 ”。

    4. Little Oh denotes "fewer than" <expression> iterations.

      Little Oh表示“ 少于 ”<expression>迭代。

    5. Little Omega denotes "more than" <expression> iterations.

      Little Omega表示“ 多于 ”<expression>迭代。

    通过示例了解时间复杂度的表示法 (Understanding Notations of Time Complexity with Example)

    O(expression) is the set of functions that grow slower than or at the same rate as the expression. It indicates the maximum time required by an algorithm over all input values, and so it represents the worst case of an algorithm's time complexity.

    O(expression)是一组比expression慢或以相同速率增长的函数。 它指示算法对所有输入值所需的最大值。 它代表了算法时间复杂度的最坏情况。

    Omega(expression) is the set of functions that grow faster than or at the same rate as expression. It indicates the minimum time required by an algorithm for all input values. It represents the best case of an algorithm's time complexity.

    Omega(expression)是一组比expression更快或以相同速度增长的函数。 它指示算法对所有输入值所需的最短时间。 它代表了算法时间复杂度的最佳情况。

    Theta(expression) consists of all the functions that lie in both O(expression) and Omega(expression). It indicates a tight bound on an algorithm: the running time grows at exactly the rate of the expression, up to constant factors.

    Theta(表达式)由同时位于O(表达式)和Omega(表达式)中的所有函数组成。 它表示算法的紧确界:运行时间恰好以该表达式的速率增长(最多相差常数因子)。
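The informal phrases above correspond to standard formal definitions; one common textbook formulation is:

```latex
f(n) \in O(g(n)) \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\ f(n) \le c \cdot g(n)

f(n) \in \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\ f(n) \ge c \cdot g(n)

f(n) \in \Theta(g(n)) \iff f(n) \in O(g(n)) \ \text{and}\ f(n) \in \Omega(g(n))
```

The constant c is what lets Big O "remove all constant factors", as noted earlier: 3n and 100n both lie in O(n).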

    Suppose you've calculated that an algorithm takes f(n) operations, where,


    f(n) = 3*n^2 + 2*n + 4.   // n^2 means square of n

    Since this polynomial grows at the same rate as n^2, you could say that the function f lies in the set Theta(n^2). (It also lies in the sets O(n^2) and Omega(n^2) for the same reason.)

    由于此多项式以与n^2相同的速率增长,因此可以说函数f位于集合Theta(n^2)中 。 (出于相同的原因,它也位于集合O(n^2)和Omega(n^2)中。)

    The simplest explanation is that Theta denotes the same growth rate as the expression. Hence, as f(n) grows at the rate of n^2, its time complexity is best represented as Theta(n^2).

    最简单的解释是,Theta表示与表达式相同的增长速率。 因此,由于f(n)以n^2的速率增长,其时间复杂度最好表示为Theta(n^2)。

    翻译自: https://www.studytonight.com/data-structures/time-complexity-of-algorithms

