

Time limit: 1 second
Memory limit: 65536K
Popularity index: 613


Problem Description

The country is facing a terrible civil war----cities in the country are divided into two parts supporting different leaders. As a merchant, Mr. M does not pay attention to politics but he actually knows the severe situation, and your task is to help him reach home as soon as possible. "For the sake of safety," said Mr. M, "your route should contain at most 1 road which connects two cities of different camps." Would you please tell Mr. M at least how long it will take to reach his sweet home?

Input:
The input contains multiple test cases.
The first line of each case is an integer N (2<=N<=600), representing the number of cities in the country.
The second line contains one integer M (0<=M<=10000), which is the number of roads.
The following M lines are the information of the roads. Each line contains three integers A, B and T, which means the road between city A and city B will cost time T. T is in the range of [1,500].
Next part contains N integers, which are either 1 or 2. The i-th integer shows the supporting leader of city i.
To simplify the problem, we assume that Mr. M starts from city 1 and his target is city 2. City 1 always supports leader 1 while city 2 is on the side of leader 2.
Note that all roads are bidirectional and there is at most 1 road between two cities.
Input is ended with a case of N=0.
Output:
For each test case, output one integer representing the minimum time to reach home.
If it is impossible to reach home according to Mr. M's demands, output -1 instead.

Example 1

Input

2
1
1 2 100
1 2
3
3
1 2 100
1 3 40
2 3 50
1 2 1
5
5
3 1 200
5 3 150
2 5 160
4 3 170
4 2 170
1 2 2 2 1
0

Output

100
90
540
This is a graph problem: split the cities into the two camps, then use Dijkstra's algorithm to compute, inside each camp, the shortest distance from every other city to the start (city 1) or to the destination (city 2).
#include <iostream>
#include <cstring>
#include <vector>
#include <algorithm>
using namespace std;

const int maxn = 650;
const int inf = 1000000000;

int n, m;
int u, v, c;
vector<int> vec[2];   // the vertices belonging to each of the two camps
int G[maxn][maxn];
int a;
int d[maxn];
int vis[maxn] = {0};

// f == 0: shortest distances from city 1 to the other cities of camp 1;
// f == 1: shortest distances from city 2 to the other cities of camp 2.
void dij(int s, int f) {
    d[s] = 0;
    for (int i = 0; i < (int)vec[f].size(); i++) {
        int min = inf, u = -1;
        for (int j = 0; j < (int)vec[f].size(); j++) {
            int v = vec[f][j];
            if (!vis[v] && min > d[v]) {
                min = d[v];
                u = v;
            }
        }
        if (u == -1) return;
        vis[u] = 1;
        for (int k = 0; k < (int)vec[f].size(); k++) {
            int v = vec[f][k];
            int w = G[u][v];
            if (w != inf && !vis[v] && d[u] + w < d[v])
                d[v] = d[u] + w;
        }
    }
}

int main() {
    while (cin >> n && n != 0) {   // input ends with a case of N = 0
        fill(G[0], G[0] + maxn * maxn, inf);
        vec[0].clear();   // clear both camps before reading a new test case
        vec[1].clear();
        cin >> m;
        while (m--) {
            cin >> u >> v >> c;
            if (G[u][v] > c) {     // keep only the cheapest road
                G[u][v] = c;
                G[v][u] = c;
            }
        }
        for (int i = 1; i <= n; i++) {
            cin >> a;
            if (a == 1) vec[0].push_back(i);
            else        vec[1].push_back(i);
        }
        fill(d, d + maxn, inf);
        memset(vis, 0, sizeof(vis));
        dij(1, 0);
        dij(2, 1);

        int ans = inf;
        for (int i = 0; i < (int)vec[0].size(); i++) {
            int u = vec[0][i];
            for (int j = 0; j < (int)vec[1].size(); j++) {
                int v = vec[1][j];
                // take edge u-v as the single bridge between the camps:
                // total time = dist(1,u) + w(u,v) + dist(v,2)
                if (G[u][v] != inf && ans > d[u] + d[v] + G[u][v])
                    ans = d[u] + d[v] + G[u][v];
            }
        }
        if (ans != inf) cout << ans << endl;
        else            cout << "-1" << endl;
    }
    return 0;
}


I've been getting crushed by problems these last two days... kept failing in inexplicable ways, and I haven't updated the blog either.
I wrote this one and got AC, which helped me get some confidence back, though I still needed the hints from the machine-test guide.
Well then, let's look at the problem:
—————————————————————————————————————————————————————

Problem Description:

The country is facing a terrible civil war----cities in the country are divided into two parts supporting different leaders. As a merchant, Mr. M does not pay attention to politics but he actually knows the severe situation, and your task is to help him reach home as soon as possible. "For the sake of safety," said Mr. M, "your route should contain at most 1 road which connects two cities of different camps." Would you please tell Mr. M at least how long it will take to reach his sweet home?

Input:

The input contains multiple test cases.
The first line of each case is an integer N (2<=N<=600), representing the number of cities in the country.
The second line contains one integer M (0<=M<=10000), which is the number of roads.
The following M lines are the information of the roads. Each line contains three integers A, B and T, which means the road between city A and city B will cost time T. T is in the range of [1,500].
Next part contains N integers, which are either 1 or 2. The i-th integer shows the supporting leader of city i.
To simplify the problem, we assume that Mr. M starts from city 1 and his target is city 2. City 1 always supports leader 1 while city 2 is on the side of leader 2.
Note that all roads are bidirectional and there is at most 1 road between two cities.
Input is ended with a case of N=0.

Output:

For each test case, output one integer representing the minimum time to reach home.     If it is impossible to reach home according to Mr. M's demands, output -1 instead.

Sample Input:

2
1
1 2 100
1 2
3
3
1 2 100
1 3 40
2 3 50
1 2 1
5
5
3 1 200
5 3 150
2 5 160
4 3 170
4 2 170
1 2 2 2 1
0

Sample Output:

100
90
540

Analysis: the basic idea is still Dijkstra's algorithm. The difference from the usual setting is that the route may contain at most one road leading from one camp to the other. How do we translate that into mathematical language? It really means "no way back": once you have reached the other camp, you must not walk back to the original camp by any road. So, when recording the edge information, cross-camp edges are recorded in one direction only; once you have crossed into the other camp, there is no road back.

One more side note: when the path itself has to be printed, define a pre[N] array and, whenever relaxing an edge updates a node, record the current node as the updated node's predecessor; a stack or a queue then lets you print the path in reverse or in forward order.
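The pre[] idea can be sketched as follows. This is a minimal, hypothetical illustration (the graph, the 0-based numbering, and the function names are mine, not part of the solution code in this post): an adjacency-matrix Dijkstra that records each node's predecessor, plus a helper that unwinds pre[] with a stack to print the path in forward order.

```cpp
#include <cassert>
#include <stack>
#include <vector>
using namespace std;

const int INF = 1000000000;

// Dijkstra over an adjacency matrix G (G[u][v] == INF means "no edge").
// Besides the distance array, fills pre[v] with the node from which
// d[v] was last improved, so the shortest path can be reconstructed.
vector<int> dijkstraWithPre(const vector<vector<int>>& G, int s, vector<int>& pre) {
    int n = G.size();
    vector<int> d(n, INF);
    vector<bool> vis(n, false);
    pre.assign(n, -1);
    d[s] = 0;
    for (int it = 0; it < n; ++it) {
        int u = -1;
        for (int v = 0; v < n; ++v)
            if (!vis[v] && (u == -1 || d[v] < d[u])) u = v;
        if (u == -1 || d[u] == INF) break;   // remaining nodes unreachable
        vis[u] = true;
        for (int v = 0; v < n; ++v)
            if (G[u][v] != INF && d[u] + G[u][v] < d[v]) {
                d[v] = d[u] + G[u][v];
                pre[v] = u;                  // remember the predecessor
            }
    }
    return d;
}

// Walk pre[] backwards from t and use a stack to emit s -> t in order.
vector<int> recoverPath(const vector<int>& pre, int t) {
    stack<int> st;
    for (int v = t; v != -1; v = pre[v]) st.push(v);
    vector<int> path;
    while (!st.empty()) { path.push_back(st.top()); st.pop(); }
    return path;
}
```

On the 3-city sample above (edges 1-2:100, 1-3:40, 2-3:50, renumbered to 0..2), the recovered path from node 0 to node 1 is 0 -> 2 -> 1 with cost 90.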
The code follows:

#include <stdio.h>
#include <vector>
using namespace std;

struct E {
    int start;
    int next;
    int cost;
} buff[10001];

vector<E> edge[601];   // cities are numbered 1..600
int Cost[601];
bool mark[601];
int camp[601];

int main(int argc, char** argv) {
    int N, M;
    while (scanf("%d", &N) != EOF) {
        if (N == 0) break;
        scanf("%d", &M);
        int i;
        // reset the adjacency lists
        for (i = 1; i <= N; i++) edge[i].clear();
        // read the road information into a temporary buffer first
        for (i = 1; i <= M; i++)
            scanf("%d%d%d", &buff[i].start, &buff[i].next, &buff[i].cost);
        // read the camp of each city
        for (i = 1; i <= N; i++)
            scanf("%d", &camp[i]);
        // now insert the edges for real
        for (i = 1; i <= M; i++) {
            int a = buff[i].start;
            int b = buff[i].next;
            E tmp;
            tmp.cost = buff[i].cost;
            if (camp[a] == camp[b]) {
                // same camp: the road is usable in both directions
                tmp.start = a; tmp.next = b; edge[a].push_back(tmp);
                tmp.start = b; tmp.next = a; edge[b].push_back(tmp);
            } else {
                // different camps: insert one direction only, camp 1 -> camp 2
                if (camp[a] == 1) {
                    tmp.start = a; tmp.next = b; edge[a].push_back(tmp);
                } else {
                    tmp.start = b; tmp.next = a; edge[b].push_back(tmp);
                }
            }
        }   // edge insertion done

        // initialize Cost; -1 means "not reached yet"
        for (i = 1; i <= N; i++) {
            Cost[i] = -1;
            mark[i] = false;
        }

        int newP = 1;
        Cost[1] = 0;
        mark[1] = true;
        // N - 1 iterations are enough to settle every reachable city
        for (i = 1; i < N; i++) {
            // relax the edges leaving the newly settled city
            int j;
            for (j = 0; j < (int)edge[newP].size(); j++) {
                int t = edge[newP][j].next;
                int c = edge[newP][j].cost;
                // skip nodes already in the settled set
                if (mark[t] == true) continue;
                // update the neighbor's distance if it improves
                if (Cost[t] == -1 || Cost[t] > Cost[newP] + c)
                    Cost[t] = Cost[newP] + c;
            }

            // pick the closest unsettled city as the next one
            int min = 123123123;
            for (j = 1; j <= N; j++) {
                if (mark[j] == true) continue;
                if (Cost[j] == -1) continue;
                if (Cost[j] < min) {
                    min = Cost[j];
                    newP = j;
                }
            }
            mark[newP] = true;
        }   // main loop

        if (Cost[2] != -1)
            printf("%d\n", Cost[2]);
        else
            printf("%d\n", -1);
    }   // per-case loop

    return 0;
}



Problem description
There is a game called "I Wanna Be the Guy", consisting of n levels. Little X and his friend Little Y are addicted to the game. Each of them wants to pass the whole game.
Little X can pass only p levels of the game. And Little Y can pass only q levels of the game. You are given the indices of levels Little X can pass and the indices of levels Little Y can pass. Will Little X and Little Y pass the whole game, if they cooperate with each other?
Input
The first line contains a single integer n (1 ≤  n ≤ 100).
The next line contains an integer p (0 ≤ p ≤ n) at first, then follows p distinct integers a1, a2, ..., ap (1 ≤ ai ≤ n). These integers denote the indices of levels Little X can pass. The next line contains the levels Little Y can pass in the same format. It's assumed that levels are numbered from 1 to n.
Output
If they can pass all the levels, print "I become the guy.". If it's impossible, print "Oh, my keyboard!" (without the quotes).
Examples

Input
4
3 1 2 3
2 2 4

Output
I become the guy.

Input
4
3 1 2 3
2 2 3

Output
Oh, my keyboard!

Note
In the first sample, Little X can pass levels [1 2 3], and Little Y can pass levels [2 4], so together they can pass all the levels.
In the second sample, no one can pass level 4.
Approach: mark each number from 1 to n that appears in the input; if all of them appear, print "I become the guy.", otherwise print "Oh, my keyboard!". An easy one!
AC code:

#include <bits/stdc++.h>
using namespace std;
int main() {
    int n, p, q, x;
    bool flag = false, used[105];
    memset(used, false, sizeof(used));
    cin >> n >> p;
    for (int i = 1; i <= p; ++i) { cin >> x; used[x] = true; }
    cin >> q;
    for (int i = 1; i <= q; ++i) { cin >> x; used[x] = true; }
    for (int i = 1; i <= n; ++i)
        if (!used[i]) { flag = true; break; }
    if (flag) cout << "Oh, my keyboard!" << endl;
    else      cout << "I become the guy." << endl;
    return 0;
}

Reposted from: https://www.cnblogs.com/acgoto/p/9159193.html
Abstract: this article is the verbatim transcript of lesson 11, "Gradient Descent Intuition", in Chapter 2, "Linear Regression with One Variable", of Andrew Ng's Machine Learning course. I wrote it down word for word while watching the videos, for my own later reference, and am sharing it here. If there are any mistakes, corrections are warmly welcome and sincerely appreciated; I also hope it is of some help to your studies.
In the previous video (article), we gave a mathematical definition of gradient descent. Let's delve deeper, and in this video (article), get better intuition about what the algorithm is doing, and why the steps of the gradient descent algorithm might make sense.

Here's the gradient descent algorithm that we saw last time. And just to remind you, this parameter, or this term $\alpha$, is called the learning rate, and it controls how big a step we take when updating my parameter $\theta _{j}$. And this second term here is the derivative term. And what I want to do in this video is give you better intuition about what each of these two terms is doing and why, when put together, this entire update makes sense.
In order to convey these intuitions, what I want to do is use a slightly simpler example, where we want to minimize the function of just one parameter. So say we have a cost function J of just one parameter, $\theta _{1}$, like we did, you know, a few videos back, where $\theta _{1}$ is a real number, okay? Just so we can have 1D plots, which are a little bit simpler to look at. Let's try to understand what gradient descent would do on this function.

So, let’s say here's my function $J(\theta _{1})$, where $\theta _{1}$ is a real number. Right? Now let’s say I have initialized gradient descent with $\theta _{1}$ at this location. So imagine that we start off at that point on my function. What gradient descent will do, is it will update
$\theta_{1}=\theta _{1}-\alpha \frac{\mathrm{d} }{\mathrm{d} \theta _{1}}J(\theta _{1})$
And just as an aside, you know this derivative term, right? If you're wondering why I changed the notation from these partial derivative symbols: if you don't know what the difference between these partial derivative symbols and ${\frac{\mathrm{d} }{\mathrm{d} \theta_{1} }}$ is, don't worry about it. Technically in mathematics we call this a partial derivative (${\frac{\partial }{\partial \theta _{1}}}$) and we call this a derivative (${\frac{\mathrm{d} }{\mathrm{d} \theta_{1} }}$), depending on the number of parameters in the function J, but that's a mathematical technicality; you know, for the purpose of this lecture, think of these partial symbols and ${\frac{\mathrm{d} }{\mathrm{d} \theta_{1} }}$ as exactly the same thing. And don't worry about whether there is any difference. I'm gonna try to use the mathematically precise notation, but for our purposes these notations are really the same thing. So, let's see what this equation will do. And so, we are going to compute this derivative. I'm not sure if you've seen derivatives in calculus before, but what a derivative does is basically say: let's take the tangent at that point, like that straight line, the red line, just touching this function, and let's look at the slope of this red line. That is what the derivative is. It says, what is the slope of the line that is just tangent to the function? Okay, and the slope of the line is of course just the height divided by this horizontal distance. Now, this line has a positive slope, so it has a positive derivative. And so, my update is going to be: $\theta _{1}$ gets updated to $\theta _{1}$ minus $\alpha$ times some positive number. $\alpha$, the learning rate, is always a positive number. And so, I'm going to take $\theta _{1}$ and update it to $\theta _{1}$ minus something, which ends up moving $\theta _{1}$ to the left, decreasing $\theta _{1}$.
And we can see this is the right thing to do, because I actually went ahead in this direction to get me closer to the minimum over there. So gradient descent so far seems to be doing the right thing.
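To make the sign of the update concrete, here is a small made-up numeric instance (the values 3, 2 and 0.1 are chosen purely for illustration): suppose at the current point $\theta _{1}=3$ the tangent slope is $\frac{\mathrm{d} }{\mathrm{d} \theta _{1}}J(\theta _{1})=2$ (positive), and the learning rate is $\alpha =0.1$. One update then gives

$\theta _{1}=3-0.1\times 2=2.8$

so $\theta _{1}$ decreases, i.e. moves to the left toward the minimum, exactly as the picture suggests.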

Let's look at another example. So let's take my same function J, just trying to draw the same function $J(\theta _{1})$. Now let's say I had instead initialized my parameter over there on the left. So $\theta _{1}$ is here; I'm gonna mark that point on the surface. Now my derivative term, ${\frac{\mathrm{d} }{\mathrm{d} \theta_{1} }}$, when evaluated at this point, is gonna look at the slope of that line. So this derivative term is the slope of this line. But this line is slanting down, so this line has negative slope. Or alternatively I say that this function has a negative derivative, which just means negative slope at that point. So this is less than or equal to zero. So when I update $\theta _{1}$, $\theta _{1}$ is updated as $\theta _{1}$ minus alpha times a negative number. And so I have $\theta _{1}$ minus a negative number, which means I'm actually going to increase $\theta _{1}$, right? Because this is minus a negative number, meaning I'm adding something to $\theta _{1}$. And what that means is that I'm going to end up increasing $\theta _{1}$. And so we start here, increase $\theta _{1}$, which again seems like the thing I want to do, to try to get me closer to the minimum. So, this hopefully explains the intuition behind what the derivative term is doing. Let's next take a look at the learning rate $\alpha$, and try to figure out what that's doing.

So, here's my gradient descent update rule. Right, there's this equation. And let's look at what can happen if $\alpha$ is either too small, or if $\alpha$ is too large. So this first example: what happens if $\alpha$ is too small? So, here's my function $J(\theta )$. Let's just start here. If $\alpha$ is too small, then what I'm going to do is multiply the update by some small number. So I end up taking, you know, a baby step like that. Okay, so that's one step. Then from this new point we're gonna take another step. But if $\alpha$ is too small, we take another baby step. And so if my learning rate is too small, I'm gonna end up, you know, taking these tiny, tiny baby steps to try to get to the minimum, and I'm gonna need a lot of steps to get to the minimum. And so if $\alpha$ is too small, gradient descent can be slow, because it's gonna take these tiny, tiny baby steps, and it's gonna need a lot of steps before it gets anywhere close to the global minimum.

Now how about if $\alpha$ is too large? So, here's my function $J(\theta _{1})$. It turns out that if $\alpha$ is too large, then gradient descent can overshoot the minimum, and may even fail to converge, or even diverge. Let's say we start off there. It's actually pretty close to the minimum. So the derivative points to the right, but if $\alpha$ is too big, I'm gonna take a huge step. Maybe I'm gonna take a huge step like that. Now, my cost function has gotten worse, because it started off from this value, but now my value has gotten worse. Now my derivative points to the left, which would actually decrease $\theta _{1}$. But look, if my learning rate is too big, I may take a huge step going from here all the way out there. So I end up going there, right? And if my learning rate is too big, it can take another huge step on the next iteration, and kind of overshoot and overshoot and so on, until you notice I'm actually getting further and further away from the minimum. And so if $\alpha$ is too large, it can fail to converge or even diverge.

Now, I have another question for you. So, this is a tricky one, and when I was first learning this stuff, it actually took me a long time to figure this out. What if your parameter $\theta _{1}$ is already at a local minimum? What do you think one step of gradient descent will do? So, let's suppose you initialize $\theta _{1}$ at a local minimum. So suppose this is your initial value of $\theta _{1}$ over here, and it's already at a local optimum, or the local minimum. It turns out that at a local optimum your derivative will be equal to zero. Since the tangent line at that point is horizontal, the slope of this line will be equal to zero, and thus this derivative term is equal to zero. And so in your gradient descent update, you have $\theta _{1}$ gets updated to $\theta _{1}$ minus $\alpha$ times zero. And so, what it means is that, if you're already at a local optimum, it leaves $\theta _{1}$ unchanged, because this, you know, update sets $\theta _{1}$ equal to $\theta _{1}$. So if your parameter is already at a local minimum, one step of gradient descent does absolutely nothing. It doesn't change the parameter, which is what you want, because it keeps your solution at the local minimum.

This also explains why gradient descent can converge to a local minimum, even with the learning rate $\alpha$ fixed. Here's what I mean by that. Let's look at an example. Here is a cost function $J(\theta )$ that maybe I want to minimize. And let's say I initialize my gradient descent algorithm, you know, out there at the magenta point. If I take one step of gradient descent, you know, maybe it'll take me to that point, because my derivative is pretty steep out there, right? Now I'm at this green point, and if I take another step of gradient descent, you notice that my derivative, meaning the slope, is less steep at the green point than at the magenta point out there, right? Because as I approach the minimum, my derivative gets closer and closer to zero. So when I take another step of gradient descent, my new derivative's slope is smaller, and I will naturally take a somewhat smaller step from this green point than I did from the magenta point. Now I am at the new point, the red point, even closer to the global minimum, so the derivative here will be even smaller than it was at the green point. So, when I take another step of gradient descent, you know, now my derivative term is even smaller, and so the magnitude of the update to $\theta _{1}$ is even smaller, so I take a small step like so. And as gradient descent runs, you will automatically take smaller and smaller steps, until eventually you are taking very small steps, you know, and you converge to the local minimum. So, just to recap: in gradient descent, as we approach a local minimum, gradient descent will automatically take smaller steps, and that's because, as we approach the local minimum, by definition the local minimum is where this derivative is equal to zero.
So, as we approach the local minimum, this derivative term will automatically get smaller, and so gradient descent will automatically take smaller steps. So this is what gradient descent looks like, and so actually there is no need to decrease $\alpha$ over time.
So, that's the gradient descent algorithm, and you can use it to try to minimize any cost function J, not only the cost function J we defined for linear regression.
In the next video (article), we're going to take the function J and set that back to be exactly linear regression's cost function, the squared cost function that we came up with earlier. And taking gradient descent and the squared cost function, and putting them together, that will give us our first learning algorithm: our linear regression algorithm.
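As a closing sketch of the update rule this lesson described, here is a minimal one-variable gradient descent in C++ on the made-up cost $J(\theta _{1})=\theta _{1}^{2}$, whose derivative is $2\theta _{1}$ and whose minimum is at $\theta _{1}=0$ (the function name and constants are mine, chosen for illustration only, not from the course):

```cpp
#include <cassert>
#include <cmath>

// Repeated application of the update theta1 := theta1 - alpha * dJ/dtheta1
// for the example cost J(theta1) = theta1^2, i.e. dJ/dtheta1 = 2 * theta1.
double gradientDescent(double theta1, double alpha, int steps) {
    for (int i = 0; i < steps; ++i) {
        double grad = 2.0 * theta1;  // derivative term: slope of the tangent
        theta1 -= alpha * grad;      // the gradient descent update rule
    }
    return theta1;
}
```

For this particular J, each step multiplies $\theta _{1}$ by $(1-2\alpha )$, which mirrors the lesson: with a fixed small $\alpha$ (say 0.1) the steps shrink automatically as the slope shrinks and $\theta _{1}$ converges to 0; with $\alpha$ too large (say 1.1) the factor is $-1.2$ and the iterates overshoot back and forth and diverge; and starting exactly at the minimum, the derivative is 0 and one step changes nothing.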