2018/03/18
<issue_start>username_0: I am trying to deploy a helm chart which uses `PersistentVolumeClaim` and `StorageClass` to dynamically provision the required storage. This works as expected, but I can't find any configuration which allows a workflow like
```
helm delete xxx
# Make some changes and repackage chart
helm install --replace xxx
```
I don't want to run the release constantly, and I want to reuse the storage in deployments in the future.
Setting the storage class to `reclaimPolicy: Retain` keeps the disks, but helm will delete the PVC and orphan them. Annotating the PVC's so that helm does not delete them fixes this problem, but then running install causes the error
```
Error: release xxx failed: persistentvolumeclaims "xxx-xxx-storage" already exists
```
I think I have misunderstood something fundamental to managing releases in helm. Perhaps the volumes should not be created in the chart at all.<issue_comment>username_1: A [PersistentVolumeClaim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) creates just a mapping between your actual [PersistentVolume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) and your pod.
Using the `"helm.sh/resource-policy": keep` annotation for the PV is not the best idea, because of this remark in the [documentation](https://github.com/kubernetes/helm/blob/master/docs/charts_tips_and_tricks.md#tell-tiller-not-to-delete-a-resource):
>
> The annotation "helm.sh/resource-policy": keep instructs Tiller to skip this resource during a helm delete operation. However, this resource becomes orphaned. Helm will no longer manage it in any way. This can lead to problems if using helm install --replace on a release that has already been deleted, but has kept resources.
>
>
>
If you create a PV manually and then delete your release, Helm will remove the PVC; the PV will be marked as "Available" and on the next deployment it will be reused. So you don't actually need to keep your PVC in the cluster to keep your data. But to make it always use the same PV, you need to use [labels and selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/).
To keep and reuse volumes, you can:
1. Create a PersistentVolume with a label, for example `for_app=my-app`, and set the "Retain" policy for that volume like this:
```
apiVersion: v1
kind: PersistentVolume
metadata:
name: myappvolume
namespace: my-app
labels:
for_app: my-app
spec:
persistentVolumeReclaimPolicy: Retain
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
```
2. Modify your PersistentVolumeClaim configuration in Helm: add a selector so that only PersistentVolumes with the label `for_app=my-app` are used.
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: myappvolumeclaim
namespace: my-app
spec:
selector:
matchLabels:
for_app: my-app
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
```
So, now your application will use the same volume each time it starts.
But please keep in mind that you may need to use selectors for other apps in the same namespace, to prevent them from claiming your PV.
Upvotes: 4 <issue_comment>username_2: Actually, I'd suggest using StatefulSets and `volumeClaimTemplates`:
<https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/>
The example there should speak for itself.
Upvotes: 0
---
2018/03/18
<issue_start>username_0: In C code, I have an input file (called in) that is a mad-lib in the format "I have really < adjective> eyes" (no spaces inside the <>), and I want to write a bool function that uses scanf to read every word and return true if the word begins with '<' (such a word is also called a token). How would I go about doing that? And yes, I have to use scanf. Here is what I have right now, but I do not think that it is completely right, so another question is: how do I know if my function is working properly?
```
/* istoken = returns true if word is a token */
bool istoken(char word[]) {
char first;
int firstindex;
while (1) {
scanf("%s", word);
first = word[MAX_LEN];
firstindex = (int)strlen(word);
if (first == '<') {
printf("The token is: %s\n", first);
return true; }
else {
return false; }
}
}
<issue_comment>username_1: In the caller, `word` must be sufficiently sized to hold the largest word in your text (+3 chars: 2 for `<,>` and the *nul-terminating* character). You should pass the maximum length for `word` as a parameter to `istoken`, but since you are using `scanf`, you must hard-code the *field width* modifier to protect your array bounds. (That is one of the reasons `fgets` is recommended over `scanf` -- but you must use `scanf`.) Don't skimp on buffer size for `word` in the caller. Something like the following should suffice in the caller (probably `main()` for you):
```
#define MAXC 1024
...
char word[MAXC] = "";
```
There is no need for `first` or `firstindex`. To check the first character in a string, all you need do is *dereference* the pointer. With that, it is simply a matter of:
```
/* istoken = returns true if word is a token */
bool istoken (char *word) {
while (scanf("%1023s", word) == 1) /* did a valid read take place? */
if (*word == '<') /* is 1st char '<' ? */
return true; /* return true */
return false; /* out of words, return false */
}
```
(**note:** simply returning the token in `word` via the pointer parameter while returning `bool`, seems a bit of an awkward factoring of your code -- but it is doable. Also, if the token exceeds `1024` chars, including the *nul-terminating* char -- you will not have a complete token in `word` on function return)
Look things over and let me know if you have further questions.
---
**A Short Example Reading `stdin`**
```
#include <stdio.h>
#include <stdbool.h>

#define MAXC 1024

/* istoken = returns true if word is a token */
bool istoken (char *word) {
    while (scanf("%1023s", word) == 1) /* did a valid read take place? */
        if (*word == '<')              /* is 1st char '<' ? */
            return true;               /* return true */
    return false;                      /* out of words, return false */
}

int main (void) {

    char word[MAXC] = "";

    if (istoken (word))
        printf ("found token: '%s'\n", word);
    else
        fprintf (stderr, "error: no token found.\n");

    return 0;
}
```
**Example Use/Output**
```
$ echo "my dog has many fleas." | ./bin/scanftoken
found token: ''
```
Last note: while you can, as you propose in the comment below, output the token from within `istoken`, e.g.
```
bool istoken(char word[]) {
while (scanf("%100s", word) == 1) {
if (word[0] == '<') {
printf("the token is: %s\n", word);
return true;
}
}
return false;
}
```
That is generally something you want to avoid. Within your *program design* you want (as a goal) to separate your implementation (what your program does, computes, etc.) from input/output. That keeps your code usable by more than one caller, not just one that wants to output `printf("the token is: %s\n", word);`
While a bit uncommon, your `istoken` function that locates a token and returns `true/false` makes more sense if the caller then uses that return to determine what to do with the token in `word`. If you are just going to print it from inside `istoken` when a token is found, and then do nothing with the return in the caller, then why declare it as `bool` at all -- you may as well declare it as `void` if you are not using the return.
Like I said, this is (a goal). You can factor your code any way you like as long as it is valid code. The use of `printf` within `istoken` is perfectly valid for temporary debugging purposes as well. (In fact that is one of the most helpful debugging tools you have: just sprinkle temporary `printf` statements throughout the logic path in your program to find out where your code works as intended and where the "train-falls-off-the-track", so to speak.)
---
**Example with File I/O**
OK, we are finally getting to `'Z'` with this `'XY'` problem. Since, as I now understand, you have your text in a file (I have used `"myfile.txt"` for the input) and you want to read your inputfile in `istoken` and return `word` and `true/false` to `main()` and if `true` then write the token to your output file (I used "tokenfile.txt" before for my output file), then what you need to do is open both your input file and output file using `fopen` in `main()` similar to the following:
```
FILE *ifp = fopen ("myfile.txt", "r"), /* infile pointer */
*ofp = fopen ("tokenfile.txt", "w"); /* outfile pointer */
```
(I'm not that creative, I just use `ifp` for the *input file pointer* and `ofp` for the *output file pointer*)
Whenever you open a file, **before** you attempt to read or write to the file, you must **validate** that the file is actually open for reading or writing (e.g. `fopen` succeeded). For example:
```
if (ifp == NULL) { /* validate input open for reading */
perror ("fopen-myfile.txt");
return 1;
}
if (ofp == NULL) { /* validate output open for writing */
perror ("fopen-tokenfile.txt");
return 1;
}
```
Now with both files open, you can call `istoken` and read from `ifp`. However, this takes modifying `istoken` to take a `FILE *` parameter for use with `fscanf` instead of using `scanf`. For example:
```
/* istoken = returns true if word is a token */
bool istoken (FILE *ifp, char *word) {
while (fscanf(ifp, "%1023s", word) == 1) /* valid read take place? */
if (*word == '<') /* is 1st char '<' ? */
return true; /* return true */
return false; /* out of words */
}
```
After the return of `istoken`, you can write to `stdout` to let the user know if a token was found and also write to `ofp` to store token in your output file, e.g..
```
if (istoken (ifp, word)) { /* call istoken passing open ifp */
printf ("found token: '%s'\n", word); /* output token */
fprintf (ofp, "%s\n", word); /* write token to outfile */
}
else
fprintf (stderr, "error: no token found.\n");
```
Lastly, you must `fclose` the files you have open. **But there is a twist** for files you **write** to. You should **validate** the `fclose` to insure a stream-error did not occur on `ofp` that may not have been otherwise caught. e.g.
```
fclose (ifp); /* close infile pointer */
if (fclose(ofp) == EOF) /* validate "close-after-write" */
perror ("stream error on outfile stream close");
```
Putting it altogether, you can do something like the following:
```
#include <stdio.h>
#include <stdbool.h>

#define MAXC 1024

/* istoken = returns true if word is a token */
bool istoken (FILE *ifp, char *word) {
    while (fscanf(ifp, "%1023s", word) == 1) /* valid read take place? */
        if (*word == '<')                    /* is 1st char '<' ? */
            return true;                     /* return true */
    return false;                            /* out of words */
}

int main (void) {

    char word[MAXC] = "";
    FILE *ifp = fopen ("myfile.txt", "r"),    /* infile pointer */
         *ofp = fopen ("tokenfile.txt", "w"); /* outfile pointer */

    if (ifp == NULL) {  /* validate input open for reading */
        perror ("fopen-myfile.txt");
        return 1;
    }
    if (ofp == NULL) {  /* validate output open for writing */
        perror ("fopen-tokenfile.txt");
        return 1;
    }

    if (istoken (ifp, word)) {  /* call istoken passing open ifp */
        printf ("found token: '%s'\n", word); /* output token */
        fprintf (ofp, "%s\n", word);          /* write token to outfile */
    }
    else
        fprintf (stderr, "error: no token found.\n");

    fclose (ifp);            /* close infile pointer */
    if (fclose(ofp) == EOF)  /* validate "close-after-write" */
        perror ("stream error on outfile stream close");

    return 0;
}
```
**Example Input File**
```
$ cat myfile.txt
my dog has many fleas.
```
**Example Use/Output**
```
$ ./bin/scanftoken
found token: ''
$ cat tokenfile.txt
```
The best advice I can give you on learning C is to simply *slow down*. There is a lot to learn, and in fact, given 30 years, I have barely scratched the surface (that, and they keep revising the standard every so often). Just take it a step at a time. Look up the `man page` for each function you use, find out what the proper parameters are, and most critically what it returns and what form of error reporting it has (e.g. does it set `errno` so you can use `perror` to report the error, or do you need to use `fprintf (stderr, ....)`?).
Always enable *compiler warnings* and read and understand the warning and **do not accept code until it compiles without warning**. You can learn a lot of C just by listening to what your compiler is telling you. And if all else fails... talk to the duck. [**How to debug small programs**](https://ericlippert.com/2014/03/05/how-to-debug-small-programs/), really, it helps `:)`
Upvotes: 3 <issue_comment>username_2: If you don't mind using a powerful lexical analyzer, I suggest using `flex`, which can help you a lot with tokenization.
As you can see, [Flex](https://en.wikipedia.org/wiki/Flex_(lexical_analyser_generator)) lets you write token patterns and generates a `C` scanner which does all the work.

>
> Here is a program which compresses multiple blanks and tabs down to a single blank, and throws away whitespace found at the end of a line:
>
>
>
```
%%
[ \t]+ putchar( ' ' );
[ \t]+$ /* ignore this token */
```
You can find more at <http://alumni.cs.ucr.edu/~lgao/teaching/flex.html>.
Upvotes: 0
---
2018/03/18
<issue_start>username_0: I was trying to execute this PHP file that I have in XAMPP. I am trying to save data to the database through a form, but when I enter the info and press the submit button, the page just reloads, or I get the message that all fields are required even though I fill them all in. Please give the code a check. The data I enter just doesn't get saved to the database. I've checked the database name and table name, and I've also tried checking whether I am able to connect to the database. It connects to the database properly, but just doesn't save the info to the table.
```
#### Donor?
I agree to the terms and conditions
Submit Request
<?php
if(isset($_POST["submit"])){
$db= 'bank';
if(!empty($_POST['name']) && !empty($_POST['lname']) && !empty($_POST['city']) && !empty($_POST['district']) && !empty($_POST['state']) && !empty($_POST['Phone']) && !empty($_POST['p1']) && !empty($_POST['pincode']) && !empty($_POST['bg']) && !empty($_POST['add1']) && !empty($_POST['add2']))
{ //added
$name=$_POST['name'];
$lname=$_POST['lname'];
$city= $_POST['city'];
$district= $_POST['district'];
$state= $_POST['state'];
$Phone= $_POST['Phone'];
$p1= $_POST['p1'];
$pincode= $_POST['pincode'];
$bg= $_POST['bg'];
$add1= $_POST['add1'];
$add2= $_POST['add2'];
$con=mysqli_connect('localhost','root','',$db) or die(mysql_error());
$sql="INSERT INTO donor VALUES('$name','$lname','$city','$district','$state','$Phone','$p1','$pincode','$bg','$add1','$add2')";
$result=mysqli_query($con,$sql);
if($result){
echo "Account Successfully Created";
} else {
echo "Failure!";
}
}
else {
echo "All fields are required!";
}
}
?>
```
---
2018/03/18
<issue_start>username_0: In SQLite, how to compare date without passing timestamp?
The date format is `2018-03-18 08:24:46.101655+00` and I want to compare against only date part as `2018-03-18`.
I have tried `where mydate='2018-03-18'`, but that didn't return any records.
Similarly, tried `Date(mydate)='2018-03-18'` but that didn't help either.
How can I compare dates ignoring the timestamp part?<issue_comment>username_1: Try using strftime
```
SELECT strftime('%Y %m %d', columnName);
```
you can find it here [strftime.php](https://www.techonthenet.com/sqlite/functions/strftime.php)
Upvotes: 0 <issue_comment>username_2: ```
select * from mytable
where strftime('%Y-%m-%d', mydate) = '2018-03-18'
```
Upvotes: 1 <issue_comment>username_3: This is not one of the supported [date formats](http://www.sqlite.org/datatype3.html#datetime).
To extract the date part from the string, use [substr()](http://www.sqlite.org/lang_corefunc.html#substr):
```
... WHERE substr(mydate, 1, 10) = '2018-03-18'
```
It might be a better idea to store dates in a correct format in the database to begin with.
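For completeness, the `substr()` approach can be verified quickly from Python's built-in `sqlite3` module -- an in-memory sketch, with the table and column names taken from the question:

```python
import sqlite3

# In-memory database with one matching and one non-matching row
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE mytable (mydate TEXT)")
conn.execute("INSERT INTO mytable VALUES ('2018-03-18 08:24:46.101655+00')")
conn.execute("INSERT INTO mytable VALUES ('2018-03-19 09:00:00.000000+00')")

# Compare only the first 10 characters, i.e. the date part
rows = conn.execute(
    "SELECT mydate FROM mytable WHERE substr(mydate, 1, 10) = '2018-03-18'"
).fetchall()
print(len(rows))  # 1 -- only the row whose date part matches
```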
Upvotes: 1 <issue_comment>username_4: It looks like there is a problem with the date format.
SQLite doesn't understand data like '+00' in a date.
So date() and strftime() will not work here if the data type is 'timestamp with time zone'.
Try using a LIKE clause instead.
Upvotes: 1
---
2018/03/18
<issue_start>username_0: Yes: I know that we shouldn't be using global variables in Python but I'm trying to understand this behavior.
I have this file called bug.py :
```
x = 0
def foo():
global x
x = 100
if __name__ == '__main__':
foo()
print(x)
```
when I execute this as a file I get the expected result of a 100, see below.
```
(mani) franz@ubuntu:~/dropboxpython/poolparty$ python bug.py
100
```
However, when I do the same thing in the repl, x doesn't turn 100, see below
```
(mani) franz@ubuntu:~/dropboxpython/poolparty$ python
Python 3.6.4 | packaged by conda-forge | (default, Dec 23 2017, 16:31:06)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from bug import *
>>> x
0
>>> foo()
>>> x
0
```
Why does this happen?<issue_comment>username_1: Let's recap what is the `from module import something` statement doing:
>
> a reference to that value is stored in the local namespace, using the
> name in the as clause if it is present, otherwise using the attribute
> name
>
>
>
I'd like to add also that `module` is imported (i.e. added to `sys.modules`), but the name `module` is not created.
The second important point is that integers are immutable. Immutable objects behave like this, because each value is a separate object:
```
a = 0 # a-->0
b = a # a-->0<--b
a = 3 # 3<--a 0<--b; new object int(3) is created; b is still 0
```
So what is happening is that the import creates a local `x` initialized to the value of `x` from `bug`, which is zero. Calling `foo()` changes `x` in the `bug` module, but as shown above, it cannot affect the local `x`.
---
Try this to see the difference between immutable `x` and mutable `y`:
```
x = 0
y = [0]
def foo():
global x, y
x = 100
y[0] = 100
if __name__ == '__main__':
foo()
print(x)
print(y)
```
```
>>> from bug import *
>>> x, y
(0, [0])
>>> foo()
>>> x, y
(0, [100])
>>>
```
UPDATE: checking the `x` set by `foo`.
```
>>> import sys
>>> sys.modules['bug'].x
100
```
Upvotes: 2 <issue_comment>username_2: *Globals in Python are global to a module, not across all modules.* If you want to use `x` from the module `bug`, then access it by importing the `bug` module, like below.
```
>>> import bug
>>> bug.x
0
>>> bug.foo()
>>> bug.x
100
>>>
```
Don't use `from bug import *`. This creates a new variable `x` initialized to whatever `bug.x` referred to at the time of the import. If you assign `x = 50` in the bug.py file, then you will get 50 after executing `from bug import *`, like below.
```
>>> from bug import *
>>> x
50
>>>
```
This new variable `x` is not affected by assignments from the method `foo()`. This is the reason why you don't get the value 100 for `x` the second time.
I tried to simplify it. Hope this will help :)
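The two import styles can also be reproduced without a separate file on disk -- a minimal sketch using a dynamically built module whose body mirrors bug.py:

```python
import types

# Stand-in for bug.py, built at runtime; its body mirrors the question
bug = types.ModuleType('bug')
exec("x = 0\n"
     "def foo():\n"
     "    global x\n"
     "    x = 100\n", bug.__dict__)

x = bug.x    # what "from bug import *" does: copy the current binding
bug.foo()    # rebinds x inside the bug module only

print(x)      # 0   -- the copied name is untouched
print(bug.x)  # 100 -- the module-level global did change
```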
Upvotes: 1
---
2018/03/18
<issue_start>username_0: I'm trying to publish my own Android library to Bintray. The upload from Gradle succeeds, but I always get an unexpected result from Bintray. It looks like this:
[](https://i.stack.imgur.com/ROrTY.png)
And this is my build.gradle
```groovy
apply plugin: 'com.android.library'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'
apply plugin: 'com.jfrog.bintray'
apply plugin: 'com.github.dcendents.android-maven'
// for Bintray
def projectVersionCodeNr = Integer.parseInt(projectVersionCode);
def libGit = libGit
def libUrl = libUrl
def libDescription = libDescription
def libGroupId = libGroupId
def libArtifactId = libArtifactId
android {
compileSdkVersion 26
buildToolsVersion "27.0.3"
defaultConfig {
minSdkVersion 19
targetSdkVersion 26
versionCode 1
versionName "1.0"
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
}
dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation"org.jetbrains.kotlin:kotlin-stdlib-jre7:$kotlin_version"
implementation 'com.nostra13.universalimageloader:universal-image-loader:1.9.5'
}
install {
repositories.mavenInstaller {
pom.project {
name libArtifactId
description libDescription
url libUrl
inceptionYear '2018'
packaging 'aar'
groupId libGroupId
artifactId libArtifactId
version '1.0.1'
licenses {
license {
name 'MIT'
url libLicenseUrl
}
}
}
}
}
bintray {
user = bintray_user
key = bintray_apikey
pkg {
repo = libGroupId
name = libArtifactId
userOrg = bintray_user_org
licenses = ['MIT']
vcsUrl = libGit
version {
name = '1.0.1'
vcsTag = '1.0.1'
}
}
configurations = ['archives']
}
```
What I want to have is others can download my libray just simply using
```groovy
compile 'com.test.sdk:mylib:1.0.1'
```
Can anyone support me in resolving my problem? Thank you<issue_comment>username_1: I have a guide on how to push an Android library to Bintray; you can try it. I have already uploaded 2 libraries this way.
[Link library to bintray](https://github.com/tntkhang/library-to-bintray)
Upvotes: 2 [selected_answer]<issue_comment>username_2: Check your POM files. I had the same problem,
and here's what I found in my POM file:
```
4.0.0
com.pmberjaya.library
ticketing-report-adapter
unspecified
```
You should add a `filesSpec` block inside `bintray` in your Gradle file:
```
filesSpec {
from 'ticketing-report-adapter/build/outputs/aar/ticketing-report-adapter-release.aar'
into '.'
version '1.0'
}
```
FULL
```
bintray {
user = "bintray.user"
key = "bintray.key"
configurations = ['archives']
pkg {
repo = "ticketing-report-adapter"
name = "sdk"
version {
name = '1.0'
desc = 'Ticketing Upload SDK'
released = new Date()
vcsTag = '1.0'
}
licenses = ['Apache-2.0']
vcsUrl = "https://gitlab.com/exelstaderlin/ticketing-android.git"
websiteUrl = "https://gitlab.com/exelstaderlin/ticketing-android.git"
}
filesSpec {
from 'ticketing-report-adapter/build/outputs/aar/ticketing-report-adapter-release.aar'
into '.'
version '1.0'
    }
}
```
Upvotes: 0
---
2018/03/18
<issue_start>username_0: I'm trying to run a command on a bunch of files:
```
find . -name "*.ext" | xargs -0 cmd
```
The above works, but it hangs because one of the folders stupidly has a `'` in the file name (others have parens and other nonsense).
How do I safely send escaped output to my command? e.g.:
```
cmd foo\ bar\(baz\)\'\!
```
[edit] I know I can run `find ... -exec cmd {} \;` but the actual command I'm running is more complicated and being piped through `sed` first<issue_comment>username_1: If you have GNU `find` you can use the `-print0` option
```
find -name "*.ext" -print0 | xargs -0 cmd
```
Otherwise you would have to ditch `xargs`. If you have Bash you could use
```
find -name "*.ext" | while read -a list ; do cmd "${list[@]}" ; done
```
Note that you do not have to specify current directory as the starting point. If no starting is specified, `.` is assumed.
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can use this `while` loop to process results of `find` command that uses `NUL` terminator using [process substitution](http://tldp.org/LDP/abs/html/process-sub.html):
```
while IFS= read -rd '' file; do
# echo "$file"
cmd
done < <(find -iname "*.ext" -print0)
```
This can handle filenames with all kind of whitespaces, glob characters, newlines or any other special characters.
Note that this requires `bash` as process substitution is not supported in bourne shell.
Upvotes: 2 <issue_comment>username_3: GNU Parallel was born exactly because of `xargs` way of dealing with ", ' and space:
```
find . -name "*.ext" | parallel cmd
```
Upvotes: 0
|
2018/03/18
| 457 | 1,622 |
<issue_start>username_0: I have set an operation hook on my sales table, but the hook is not triggered when a new row is inserted into that table. My hook code is given below. The database is MySQL.
```
module.exports = function (LiveSales) {
  LiveSales.observe('after save', function (ctx, next) {
    var socket = LiveSales.app.io;
    console.log("New Item added");
    if (ctx.isNewInstance) {
    } else {
    }
    // Calling the next middleware..
    next();
  }); // after save..
};
```
|
2018/03/18
| 633 | 2,422 |
<issue_start>username_0: I am trying to map a keyboard's key to touch a certain point on stage.
This is my current code but it doesn't crash or do anything.
```
InputEvent touch = new InputEvent();
touch.setType(InputEvent.Type.touchUp);
touch.setStageX(400);
touch.setStageY(200);
currentStage.getRoot().fire(touch); //this doesn't do anything
```
The currentStage instance is created and set as the InputProcessor. I have placed a button at (400, 200) to capture the event, but the code above failed to do so.<issue_comment>username_1: You can use the following methods of `Stage` to emulate user input:
```
/**
* Applies a touch down event to the stage and returns true if an actor
* in the scene {@link Event#handle() handled} the event.
*/
public boolean touchDown(int screenX, int screenY, int pointer, int button)
/**
* Applies a touch moved event to the stage and returns true if an actor
* in the scene {@link Event#handle() handled} the
* event. Only {@link InputListener listeners} that returned true for
* touchDown will receive this event.
*/
public boolean touchDragged (int screenX, int screenY, int pointer)
/**
* Applies a touch up event to the stage and returns true if an actor
* in the scene {@link Event#handle() handled} the event.
* Only {@link InputListener listeners} that returned true for
* touchDown will receive this event.
*/
public boolean touchUp (int screenX, int screenY, int pointer, int button)
```
In your case:
```
Vector2 point = new Vector2(400, 300);
currentStage.stageToScreenCoordinates(point);
// this method works with screen coordinates
currentStage.touchDown((int)point.x, (int)point.y, 0, 0);
```
Upvotes: 0 <issue_comment>username_2: It seems like you are expecting the InputEvent to cause each actor in the hierarchy to be compared to the coordinates to determine if it responds. That's not how Stage handles input events.
When actual screen touches occur, Stage determines which actor was touched and fires the InputEvent directly on that actor. If you fire a manually-created InputEvent on the root, only the root has an opportunity to respond to it.
If you want to manually create an input event and let the stage figure out which actor to give it to, you can call `stage.hit()` and if it returns an Actor, that's the touchable Actor that was under the touched point, and you can fire the event on that Actor.
Upvotes: 2 [selected_answer]
|
2018/03/18
| 770 | 2,362 |
<issue_start>username_0: I'm given a decimal\_string e.g. 12.34, how would I get the values before and after the decimal point?
```
a = 12
b = 34
```
How do I get the values of `a` and `b`?<issue_comment>username_1: **Edit:** After reading comment "This seems to work but if decimal\_string is e.g. 0 or 1234 it gives nothing and I need it to be a 0. How would I do this?", I submit the following:
In the following code, we convert the string to a decimal, then:
1. the integer component of it is `a`
2. the digits after the integer component are `b`. We use the modulus `%` operator to get the remainder of division by `1` and then simply strip off the `0.` from the beginning, converting it back to an `int` at the end.
Here is the code:
```
import decimal

try:
    num = decimal.Decimal("12.34")
    a = int(num)
    b = int(str(num % 1)[2:])
except decimal.InvalidOperation:
    a = None
    b = None
```
---
**Original:** The Python string object has a partition method for this:
```
a,_,b = "12.34".partition('.')
```
Note the `_` is just a variable that will hold the partition, but isn't really used for anything. It could be anything, like `z`.
Another thing to note here is Tuple Unpacking... the partition method returns a tuple with len() of 3. Python will assign the three values to the respective 3 variables on the left.
Alternately, you could do this:
```
val = "12.34".partition('.')
val[0] # is the left side - 12
val[2] # is the right side - 34
```
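Building on the `partition` approach, here is a small helper that also handles the plain-integer case raised in the comments (a sketch; the name `split_decimal` is introduced here just for illustration):

```python
def split_decimal(s):
    """Split a decimal string into (integer_part, fractional_part) as ints."""
    a, _, b = s.partition('.')
    return int(a), int(b) if b else 0

print(split_decimal("12.34"))  # (12, 34)
print(split_decimal("1234"))   # (1234, 0)
print(split_decimal("0"))      # (0, 0)
```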
Upvotes: 2 <issue_comment>username_2: Use `split` and `join` and then type cast to `int`:
```
s = '12.34'
a = int(''.join(s.split('.')[0])) # 12
b = int(''.join(s.split('.')[1])) # 34
```
Handling special cases (non-decimal strings):
```
s = '1234'
if s.find('.') != -1:
a = int(''.join(s.split('.')[0]))
b = int(''.join(s.split('.')[1]))
else:
a = int(s)
b = 0
print(a, b) # 1234 0
```
Upvotes: 2 <issue_comment>username_3: Quick and dirty solution without having to go into modulo.
```
a = int(12.34)
b = int(str(12.34)[3:])
```
int() will cut off decimal points **without** rounding.
As for the decimal points, turn the number into a string - that lets you manipulate it like an array. Grab the trailing numbers and convert them back to ints.
....
That said, the other answers are way cooler and better so you should go with them
Upvotes: -1
|
2018/03/18
| 963 | 2,968 |
<issue_start>username_0: I have this function here and I am struggling to figure out how the output is derived from this. Any help would be appreciated. thank you!
```
class A:
    def __init__(self, a: int, b: [str]):
        self._foo = a
        self._bar = b

    def get_foo(self):
        return self._foo

    def get_bar(self):
        return self._bar

def do_that(given: A):
    x = given.get_foo()
    x += 10
    y = given.get_bar()
    y[0] += ' there'
    y = ['cool']
    given = A(-10, ['bye'])

x = A(1, ['boo'])
print(x.get_foo())
print(x.get_bar())
do_that(x)
print(x.get_foo())
print(x.get_bar())
```
Can someone explain why this is the output? Where does ['boo there'] come from and the 1 right before that?
```
1
['boo']
1
['boo there']
```
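The printed values come down to the difference between mutating a shared object and rebinding a local name; a minimal sketch of the same effect (`do_that_sketch` is a name made up here):

```python
def do_that_sketch(number, items):
    number += 10          # ints are immutable: this rebinds the local name only
    items[0] += ' there'  # this mutates the list the caller also references
    items = ['cool']      # rebinds the local name; the caller never sees it

foo = 1
bar = ['boo']
do_that_sketch(foo, bar)
print(foo)  # 1
print(bar)  # ['boo there']
```

The same happens inside `do_that`: `x += 10` and `given = A(-10, ['bye'])` only rebind local names, while `y[0] += ' there'` mutates the list that `x` still holds, which is why `x.get_foo()` stays `1` but `x.get_bar()` becomes `['boo there']`.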
|
2018/03/18
| 1,715 | 4,660 |
<issue_start>username_0: I am fairly new to SQL. My table is
```
id mark datetimes
------|-----|------------
1001 | 10 | 2011-12-20
1002 | 11 | 2012-01-10
1005 | 12 | 2012-01-10
1003 | 10 | 2012-01-10
1004 | 11 | 2018-10-10
1006 | 12 | 2018-10-19
1007 | 13 | 2018-03-12
1008 | 15 | 2018-03-13
```
I need to select an ID with the highest mark at the end of each month (Year also matters) and ID can be repeated
My desired output would be
```
id mark
-----|----
1001 | 10
1005 | 12
1006 | 12
1008 | 15
```
So far I've Only able to get the highest value in each month
```
Select Max(Mark)'HighestMark'
From StudentMark
Group BY Year(datetimes), Month(datetimes)
```
When I tried to
```
Select Max(Mark)'HighestMark', ID
From StudentMark
Group BY Year(datetimes), Month(datetimes), ID
```
I get
```
Id HighestMark
----------- ------------
1001 10
1002 11
1003 12
1004 10
1005 11
1006 12
1007 13
1008 15
```<issue_comment>username_1: I don't see a way of doing this in a single query. But we can easily enough use one subquery to find the final mark in the month for each student, and another to find the student with the highest final mark.
```
WITH cte AS (
SELECT *,
ROW_NUMBER() OVER (PARTITION BY ID, CONVERT(varchar(7), datetimes, 126)
ORDER BY datetimes DESC) rn
FROM StudentMark
)
SELECT ID, Mark AS HighestMark
FROM
(
SELECT *,
RANK() OVER (PARTITION BY CONVERT(varchar(7), datetimes, 126)
ORDER BY Mark DESC) rk
FROM cte
WHERE rn = 1
) t
WHERE rk = 1
ORDER BY ID;
```
[Demo](http://rextester.com/ZHIBJ42366)
Upvotes: 1 <issue_comment>username_2: You can try it like the following.
**Using `ROW_NUMBER()`**
```
SELECT * FROM
(
SELECT *,
ROW_NUMBER() OVER(PARTITION BY YEAR(DATETIMES)
,MONTH(DATETIMES) ORDER BY MARK DESC) AS RN
FROM [MY_TABLE]
)T WHERE RN=1
```
**Using `WITH TIES`**
```
SELECT TOP 1 WITH TIES ID, mark AS HighestMarks
FROM [MY_TABLE]
ORDER BY ROW_NUMBER() OVER (PARTITION BY YEAR(datetimes)
,MONTH(datetimes) ORDER BY mark DESC)
```
**Example**:
```
WITH MY AS
(
SELECT
* FROM (VALUES
(1001 , 10 , '2011-12-20'),
(1002 , 11 , '2012-01-10'),
(1005 , 12 , '2012-01-10'),
(1003 , 10 , '2012-01-10'),
(1004 , 11 , '2018-10-10'),
(1006 , 12 , '2018-10-19'),
(1007 , 13 , '2018-03-12'),
(1008 , 15 , '2018-03-13')
) T( id , mark , datetimes)
)
SELECT ID,Mark as HighestMark FROM
(
SELECT *,
ROW_NUMBER() OVER(PARTITION BY YEAR(DATETIMES),MONTH(DATETIMES) ORDER BY MARK DESC) AS RN
FROM MY
)T WHERE RN=1
```
**Output:**
```
ID HighestMark
1001 10
1005 12
1008 15
1006 12
```
Upvotes: 2 [selected_answer]<issue_comment>username_3: Use `RANK` in case there are more than 1 student having the same highest mark.
```
select id, mark
from
(select *,
rank() over( partition by convert(char(7), datetimes, 111) order by mark desc) seqnum
from studentMark ) t
where seqnum = 1
```
Upvotes: 0 <issue_comment>username_4: In the query below, you have included the **ID** column in the GROUP BY; because of this, it produces a separate group (and row) for every ID.
```
Select Max(Mark)'HighestMark', ID From StudentMark Group BY Year(datetimes), Month(datetimes), ID
```
Remove ID column from this script and try again.
Upvotes: 0 <issue_comment>username_5: this should work:
```
select s.ID, t.Mark, t.[Month year] from Studentmark s
inner join (
Select
Max(Mark)'HighestMark'
,cast(Year(datetimes) as varchar(10)) +
cast(Month(datetimes) as varchar(10)) [month year]
From StudentMark
Group BY cast(Year(datetimes) as varchar(10))
+ cast(Month(datetimes) as varchar(10))) t on t.HighestMark = s.mark and
t.[month year] = cast(Year(s.datetimes) as varchar(10)) + cast(Month(s.datetimes) as varchar(10))
```
Upvotes: 0 <issue_comment>username_6: If for some reason you abhor subqueries, you can actually do this as:
```
select distinct
first_value(id) over (partition by year(datetimes), month(datetimes) order by mark desc) as id,
max(mark) over (partition by year(datetimes), month(datetimes)) as mark
from StudentMark;
```
Or:
```
select top (1) with ties id, mark
from StudentMark
order by row_number() over (partition by year(datetimes), month(datetimes) order by mark desc);
```
In this case, you can get all students in the event of ties by using `rank()` or `dense_rank()` instead of `row_number()`.
Upvotes: 0
|
2018/03/18
| 1,922 | 10,495 |
<issue_start>username_0: I have created a custom notification for Downloading an mp3 file from a given URL. But I need to know how to add `pause` and `cancel` button to the custom notification I created.
Here is the partial code for custom Notification :
```
if (!downloadUrl.toString().isEmpty()) {
DownloadManager.Request request = new DownloadManager.Request(Uri.parse(downloadUrl));
request.setMimeType("audio/MP3");
request.setTitle(vMeta.getTitle());
request.allowScanningByMediaScanner();
request.setNotificationVisibility(DownloadManager.Request.VISIBILITY_HIDDEN);
request.setDestinationInExternalPublicDir(storage, vMeta.getTitle() + extension);
final DownloadManager manager = (DownloadManager) getSystemService(Context.DOWNLOAD_SERVICE);
final long id = manager.enqueue(request);
registerReceiver(new DownloadReceiver(id, storage, vMeta.getTitle() + extension),
new IntentFilter(DownloadManager.ACTION_DOWNLOAD_COMPLETE));
mBuilder = new NotificationCompat.Builder(getApplicationContext());
Intent intent = new Intent();
PendingIntent pendingIntent = PendingIntent.getActivity(getApplicationContext(), 0, intent, 0);
mBuilder.setContentIntent(pendingIntent);
mBuilder.setSmallIcon(R.drawable.ic_music_video_white_24dp);
mBuilder.setContentTitle("Downloading");
mBuilder.setContentText(vMeta.getTitle());
mBuilder.setOngoing(false);
//mBuilder.addAction();
mNotificationManager = (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
new Thread(new Runnable() {
@Override
public void run() {
boolean downloading = true;
while (downloading) {
DownloadManager.Query q = new DownloadManager.Query();
q.setFilterById(id);
Cursor cursor = manager.query(q);
cursor.moveToFirst();
if( cursor != null && cursor.moveToFirst() ) {
bytes_downloaded = cursor.getInt(cursor
.getColumnIndex(DownloadManager.COLUMN_BYTES_DOWNLOADED_SO_FAR));
bytes_total = cursor.getInt(cursor.getColumnIndex(DownloadManager.COLUMN_TOTAL_SIZE_BYTES));
dl_progress = (int) ((bytes_downloaded * 100l) / bytes_total);
}
mNotificationManager.notify(001, mBuilder.build());
if (cursor.getInt(cursor.getColumnIndex(DownloadManager.COLUMN_STATUS)) == DownloadManager.STATUS_SUCCESSFUL) {
downloading = false;
mBuilder.setContentTitle("Download complete")
.setOngoing(false)
.setAutoCancel(true)
.setProgress(100,100,false);
mNotificationManager.notify(001, mBuilder.build());
}
runOnUiThread(new Runnable() {
@Override
public void run() {
mBuilder.setContentTitle("Downloading: "+dl_progress+"%");
mBuilder.setProgress(100,dl_progress,false);
mNotificationManager.notify(001, mBuilder.build());
}
});
cursor.close();
}
}
}).start();
}
}
}
}.extract(ytLink, true, true);
}
});
}
}
protected void onDestroy() {
super.onDestroy();
}
public void onBackPressed(){
super.onBackPressed();
}
public class DownloadReceiver extends BroadcastReceiver {
private long id;
private String dirType;
private String subPath;
public DownloadReceiver(long id, String dirType, String subPath) {
this.id = id;
this.dirType = dirType;
this.subPath = subPath;
}
@Override
public void onReceive(Context context, Intent intent) {
if (intent.getLongExtra(DownloadManager.EXTRA_DOWNLOAD_ID, -1) == id) {
MainActivity.this.unregisterReceiver(this);
File oldFile = new File(Environment.getExternalStoragePublicDirectory(dirType), subPath);
String newSubPath = subPath.substring(0, subPath.lastIndexOf('.')) +"|MEGA"+".mp3";
File newFile = new File(Environment.getExternalStoragePublicDirectory(dirType), newSubPath);
Boolean result = oldFile.renameTo(newFile);
Toast.makeText(context, "Download " + (result ? "succeeded" : "failed"), Toast.LENGTH_SHORT).show();
}
}
}
```<issue_comment>username_1: You need to add actions to the notification for the media controls; each action is added with its own PendingIntent:
```
.addAction(R.drawable.ic_prev, "Previous", prevPendingIntent)
.addAction(R.drawable.ic_pause, "Pause", pausePendingIntent)
.addAction(R.drawable.ic_next, "Next", nextPendingIntent)
```
You also need to set the media style, using the code below:
```
.setStyle(new android.support.v4.media.app.NotificationCompat.MediaStyle()
        .setShowActionsInCompactView(1 /* #1: pause button */)
        .setMediaSession(mMediaSession.getSessionToken()))
```
You can also check [this link](https://developer.android.com/training/notify-user/expanded#media-style) for more instructions.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Refer to this simple example; everything is in the code:
```
public class MainActivity extends AppCompatActivity {

    private DownloadReceiver downloadReceiver = null;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        init();
        findViewById(R.id.button1).setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // notifyFirst();
                notifySecondNotification();
            }
        });
    }

    private void init() {
        downloadReceiver = new DownloadReceiver();
        IntentFilter intentFilter = new IntentFilter(DownloadReceiver.ACTION_1);
        intentFilter.addAction(DownloadReceiver.ACTION_2);
        registerReceiver(downloadReceiver, intentFilter);
    }

    private void notifySecondNotification() {
        Intent button1I = new Intent(DownloadReceiver.ACTION_1);
        PendingIntent button1PI = PendingIntent.getBroadcast(this, 0, button1I, 0);
        Intent button2I = new Intent(DownloadReceiver.ACTION_2);
        PendingIntent button2PI = PendingIntent.getBroadcast(this, 0, button2I, 0);
        /*
         * use RemoteViews to customize the notification layout
         * R.layout.notification
         */
        RemoteViews remoteViews = new RemoteViews(getPackageName(), R.layout.notification);
        /*
         * bind click events
         */
        remoteViews.setOnClickPendingIntent(R.id.notificationButton1, button1PI);
        remoteViews.setOnClickPendingIntent(R.id.notificationButton2, button2PI);
        Notification notification = new NotificationCompat.Builder(this)
                .setTicker("tttttttttt")
                .setContentTitle("setContentTitle")
                .setContentText("setContentText")
                .setSmallIcon(android.R.drawable.ic_menu_report_image)
                /*
                 * set remoteViews
                 */
                .setContent(remoteViews)
                .build();
        NotificationManagerCompat notificationManager = NotificationManagerCompat.from(this);
        notificationManager.notify(0, notification);
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        unregisterReceiver(downloadReceiver);
    }
}
```
your download receiver example:
```
public class DownloadReceiver extends BroadcastReceiver {
    public static final String ACTION_1 = "Press11111";
    public static final String ACTION_2 = "Press22222";

    @Override
    public void onReceive(Context context, Intent intent) {
        String action = intent.getAction();
        assert action != null;
        String toastStr = "you touch";
        if (action.equals(ACTION_1)) {
            // download......
            Toast.makeText(context, toastStr + "1111111", Toast.LENGTH_SHORT).show();
        } else if (action.equals(ACTION_2)) {
            // cancel download......
            Toast.makeText(context, toastStr + "2222222", Toast.LENGTH_SHORT).show();
        }
    }
}
```
notification.xml
```
xml version="1.0" encoding="utf-8"?
```
Upvotes: 2
|
2018/03/18
| 841 | 3,559 |
<issue_start>username_0: One of the biggest issue we face now with parse-server is duplication. Although we have implemented a Parse cloud code to prevent such event through `beforeSave` and `afterSave` methods at the same time added external middleware to check for existing object before saving still we face duplication over and over specially on concurrent operations.
**Here is our code to prevent duplication for a specific class:**
```
Parse.Cloud.beforeSave("Category", function(request, response) {
var newCategory = request.object;
var name = newCategory.get("name");
var query = new Parse.Query("Category");
query.equalTo("name", name);
query.first({
success: function(results) {
if(results) {
if (!request.object.isNew()) { // allow updates
response.success();
} else {
response.error({errorCode:400,errorMsg:"Category already exist"});
}
} else {
response.success();
}
},
error: function(error) {
response.success();
}
});
});
Parse.Cloud.afterSave("Category", function(request) {
var query = new Parse.Query("Category");
query.equalTo("name", request.object.get("name"));
query.ascending("createdAt");
query.find({
success:function(results) {
if (results && results.length > 1) {
for(var i = (results.length - 1); i > 0 ; i--) {
results[i].destroy();
}
}
else {
// No duplicates
}
},
error:function(error) {
}
});
});
```
The code above is able to prevent some duplicates, but most still go through, for example:
[](https://i.stack.imgur.com/ZwZzL.png)
What is the "ultimate way" to prevent duplication with Parse server?<issue_comment>username_1: You can always create a unique index in mongodb for the field that should be unique in your document.
This way, any save that conflicts with that index will be aborted.
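In the MongoDB shell, that unique index looks like this (a sketch; it assumes the Parse class is backed by a collection named `Category`, which can differ per deployment):

```
db.Category.createIndex({ "name": 1 }, { "unique": true })
```

Any save that would produce a second document with the same `name` is then rejected at the database level, which closes the concurrency window that the cloud-code check leaves open.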
Upvotes: 4 [selected_answer]<issue_comment>username_2: Maybe you should write something with Promises like :
```
Parse.Cloud.beforeSave("Category", function (request, response) {
return new Promise((resolve, reject) => {
var query = new Parse.Query("Category");
query.equalTo("name", "Dummy");
return query.first().then(function (results) {
resolve(); // or reject()
});
})
});
Parse.Cloud.beforeSave("Category", async (request) => {
(...)
await results = query.first();
// then your logic here
response.success();
response.error({ errorCode: 400, errorMsg: "Category already exist" })
})
```
Upvotes: 0 <issue_comment>username_3: Here is my Solution:
```js
Parse.Cloud.beforeSave( 'ClassName', async ( request ) => {
const columnName = 'columnName'
const className = 'ClassName'
if( request.object.isNew() ) {
var newCategory = request.object
var name = newCategory.get( columnName )
var query = new Parse.Query( className )
query.equalTo( columnName, name )
const results = await query.count()
if( results === 0 ) {
// no response.success needed
// https://github.com/parse-community/parse-server/blob/alpha/3.0.0.md
} else {
throw 'Is not unique';
}
}
} )
```
Upvotes: 0
|
2018/03/18
| 682 | 2,235 |
<issue_start>username_0: I have a search engine that does the following things:
Read an input value and encode it using js, then redirect.
```
//read and save into `query` var
window.location.href = "/search/" + encodeURIComponent(query);
```
So if user enters
>
> What is the meaning of & sign ?
>
>
>
The URL can't end up like this:
```
example.com/search/What%20is%20the%20meaning%20of%20&this%20sign?
```
And should instead be:
```
example.com/search/What%20is%20the%20meaning%20of%20%26this%20sign%3F
```
Now when I dump the $\_GET['parameters'] i get
```
string() "search/What is the meaning of "
```
I expect to get:
```
What is the meaning of & sign ?
```
I have tried:
```
$val = urldecode($_GET['parameters']);
```
But I have had no luck. Maybe I should change the way JavaScript encodes the URL? What are your suggestions?<issue_comment>username_1: Just do
on client-side
>
> window.location.href = "/search/" + query;
>
>
>
and on server-side
>
> $val = urldecode($\_GET['parameters']);
>
>
>
Upvotes: -1 <issue_comment>username_2: You've mentioned that you're calling the following to obtain the value of the user's query:
```
$val = urldecode($_GET['parameters']);
```
This implies that a URL calling your PHP page would have a shape similar to the following:
```
http://foo.bar/?parameters=
```
The important thing to include in the URL is `?`; when a URL is parsed, the `?` signals that whatever comes afterward is a URL-encoded query.
Thus, in your javascript:
```
window.location.href = "/search/?parameters=" + encodeURIComponent(query);
```
Then your existing code should work.
Upvotes: 0 <issue_comment>username_3: PHP decodes URL parameters automatically into the [`$_GET` superglobal](http://php.net/manual/en/reserved.variables.get.php) as long as you're using the standard [query string](https://en.wikipedia.org/wiki/Query_string) syntax. If you use your own syntax, you have to roll your own code (you already have custom code in the input form).
The raw URL can be fetched from `$_SERVER['REQUEST_URI']` and parsed with the text manipulation tool of your choice. It's worth noting that this isn't an uncommon set up (many PHP frameworks do things this way).
Upvotes: 1
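As a client-side sanity check, `encodeURIComponent` does escape both `&` and `?`:

```javascript
var query = "What is the meaning of & sign ?";
var encoded = encodeURIComponent(query);
console.log(encoded);
// What%20is%20the%20meaning%20of%20%26%20sign%20%3F
```

So the data is intact when it leaves the browser; the trick is to put it where PHP will look for it (a query-string parameter) rather than in a path segment parsed by hand.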
|
2018/03/18
| 565 | 1,890 |
<issue_start>username_0: I have a nodeJS loopback application running in a Docker Container.
When I monitor the logs there is no timestamp for the logs, resulting in obvious challenges.
Is there a way to enable timestamps for these logs?
Thanks
[](https://i.stack.imgur.com/g1RWn.png)<issue_comment>username_1: As you did not share your Dockerfile or any logging configuration for LoopBack, I created a simple app using loopback-cli and tested it. Everything worked fine.
I created a hello-world app; I worked with Express rather than LoopBack, but you can follow this article:
<http://loopback.io/getting-started/>
Here is my docker file
```
FROM goabode/nodejs
VOLUME [ "/opt/nodejs" ]
COPY . /opt/nodejs
WORKDIR /opt/nodejs
EXPOSE 3000
CMD [ "node" , "." ]
```
I add winston in `server/boot/root.js` for testing
```
'use strict';
module.exports = function(server) {
var winston = require('winston');
// Install a `/` route that returns server status
var router = server.loopback.Router();
router.get('/', server.loopback.status());
router.get('/ping', function(req, res) {
winston.log('info', 'This is a log event', {timestamp: Date.now(), pid: process.pid});
winston.info('This is another log event', {timestamp: Date.now(), pid: process.pid});
res.send('pong');
});
server.use(router);
};
```
Follow this article
<https://www.loggly.com/ultimate-guide/node-logging-basics/>
[](https://i.stack.imgur.com/vCQID.png)
<https://docs.strongloop.com/display/SLC/Using+logging+libraries>
Upvotes: 0 <issue_comment>username_2: Turned out I just needed to add the -t option.
So..
```
docker service logs -ft {NAME_OF_THE_SERVICE}
```
This will output the logs with the timestamp included.
Upvotes: 2 [selected_answer]
|
2018/03/18
| 443 | 1,743 |
<issue_start>username_0: Here's my s3 session:
```
img = request.FILES.get('image')
filename = random_string()
"""if filename exists in my s3 bucket:
create another random_string"""
session = boto3.Session(
    aws_access_key_id=AWS_ACCESS_KEY_ID,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
)
s3 = session.resource('s3')
s3.Bucket('my-bucket').put_object(Key='media/%s' % img.name, Body=img)
```
Basically when I upload a file via AJAX, as per the code above, I generate a random filename. However before using this filename for the new uploaded image, I want to check that it doesn't exist, to prevent overlap. If it does exist, then try another random string.
How can I do this?<issue_comment>username_1: Check your `random_string()` function. Maybe cache the results of your random strings and verify that the string you are about to return has not been used before returning.
Upvotes: 0 <issue_comment>username_2: How good is your random function? If it is possible, I would recommend using a [UUID generator](https://www.npmjs.com/package/uuid) to come up with file names, and assume no collisions (or assume eventual collisions, and follow the next steps).
Regarding how to tell if a key already exists, I would expect you can just perform a 'get-object' for the given generated key. If you get something back, then there is an object there already and a new key should be generated and tested.
Note that this approach is not an atomic operation, and would leave a window for the key to be generated elsewhere (assuming multiple concurrent executions is possible) and placed into the S3 bucket between check and put. It is because of this possibility that I recommend a UUID generator in the first place.
Upvotes: 1
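Following the UUID suggestion, Python's standard library can generate the name directly; with 122 bits of randomness per `uuid4`, the existence check becomes effectively unnecessary (a sketch; `random_filename` is a name made up here):

```python
import uuid

def random_filename(extension=".jpg"):
    # uuid4().hex is 32 hex characters; collisions are practically impossible
    return uuid.uuid4().hex + extension

print(random_filename())  # e.g. '3f2b8c1e9a4d4e0f8b6a2c7d5e1f0a9b.jpg'
```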
|
2018/03/18
| 526 | 1,981 |
<issue_start>username_0: I've followed the Phabricator [Configuration Guide](https://secure.phabricator.com/book/phabricator/article/configuration_guide/), and after installing all dependencies, I'm facing the following message:
```
Request parameter '__path__' is not set. Your rewrite rules are not configured correctly.
```
This message is shown when I try to access `www.cleverbit.com.br/phabricator/webroot/`
I have a `apache2.conf` file configured just the way the docs suggested:
```
# Change this to the domain which points to your host.
ServerName cleverbit.com.br
# Change this to the path where you put 'phabricator' when you checked it
# out from GitHub when following the Installation Guide.
#
# Make sure you include "/webroot" at the end!
DocumentRoot /var/www/html/phabricator/webroot
RewriteEngine on
RewriteRule ^(.*)$ /index.php?__path__=$1 [B,L,QSA]
<Directory />
    Options FollowSymLinks
    AllowOverride All
    Require all denied
</Directory>

<Directory /var/www/>
    AllowOverride All
    Require all granted
</Directory>

<Directory /var/www/html/phabricator/webroot>
    Require all granted
    AllowOverride All
</Directory>
```
---
What's exactly wrong with my apache rewrite configuration?<issue_comment>username_1: Apache does not have inheritance inside Directory statements, so you also need the `AllowOverride All` line inside the Directory block for `/var/www/html/phabricator/webroot`.
To avoid confusion, I would get rid of the `/var/www` Directory block, unless you have another VirtualHost that uses it (in which case, you might want to move Phabricator out from that directory to avoid accidentally creating back doors).
Upvotes: 2 [selected_answer]<issue_comment>username_2: Setting the following rewrite rules fixed this for me
```
RewriteEngine on
RewriteRule ^/rsrc/(.*) - [L,QSA]
RewriteRule ^/favicon.ico - [L,QSA]
RewriteRule ^(.*)$ /index.php?__path__=$1 [B,L,QSA]
```
thanks to <https://gist.github.com/sparrc/b4eff48a3e7af8411fc1>
Upvotes: 0
|
2018/03/18
| 306 | 1,199 |
<issue_start>username_0: Is it possible to use git as a command line inside Visual Studio, like you would in **VS Code** or **Intellij**? Every online info on it suggests that I right-click the repos, then select open command line. However, that's opening the command line outside Visual Studio.
Thanks for helping<issue_comment>username_1: >
> The Visual Studio team is prioritizing other suggestions and
> closing it at this time.
>
>
> If you would like us to reconsider this, please create a new
> suggestion. Again, thank you for sharing with us!
>
>
> <NAME>man Program Manager, Visual Studio
>
>
>
It is not possible with Visual Studio; you can see the feedback **[here](https://visualstudio.uservoice.com/forums/121579-visual-studio-ide/suggestions/4509086-visual-studio-command-prompt-window-console)**.
If you really want to run commands inside Visual Studio, you can try using the Package Manager Console from
**Tools -> Nuget Package Manager -> Package Manager Console**
Upvotes: 2 <issue_comment>username_2: Typing git commands directly in the Package Manager Console is straightforward. Try it out in **View -> Other Windows -> Package Manager Console**.
Upvotes: -1
|
2018/03/18
| 592 | 2,060 |
<issue_start>username_0: According to [Retrieving a document](https://www.elastic.co/guide/en/elasticsearch/guide/current/get-doc.html) documentation
```
GET /website/blog/123/_source
```
would directly return the document stored inside the `_source` field.
I'm currently using Node JS's express framework. How should I implement this in my code?
```
esClient.search({
index: "myIndex",
type: "myType",
body: {
"query": {
"match_all": {}
},
"size": 3,
"from": 1
}
}).then(function (resp) {
var result = resp.hits.hits;
res.status(200).send({data: {recommendations: result, showItemFrom: showItemFrom}})
}, function (err) {
console.trace(err.message)
res.status(500).send({data: err.message})
})
```
I'm getting the response this way...
```
[
  {
    "_source": {
      "id": 1,
      "title": "Test"
    }
  }
]
```
However, I want it this way...
```
[
{
id:1,
title:"Test"
}
]
```<issue_comment>username_1: After
```
index:"myIndex"
```
Add:
```
source:true
```
Upvotes: 0 <issue_comment>username_2: You need to call the [`getSource()` function](https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/api-reference.html#api-getsource), like this:
```
esClient.getSource({
index: "website",
type: "blog",
id: "123"
}).then(function (source) {
// do something with source
}, function (err) {
// error happened
})
```
Upvotes: 0 <issue_comment>username_3: I don't think the Elasticsearch API has a method to do that for searches, the one that username_2 mentioned works, but it is only usable to GET documents directly through its id.
But you can map the result using the Javascript [Array#map()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) method:
```
var result = resp.hits.hits.map(hit => hit._source);
```
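For illustration, here is the same mapping applied to a mock response shaped like the one in the question (the `resp` object below is a hand-built stand-in, not a real Elasticsearch response):

```javascript
// Hand-built mock of the relevant part of a search response.
const resp = {
  hits: {
    hits: [
      { _source: { id: 1, title: "Test" } },
      { _source: { id: 2, title: "Other" } }
    ]
  }
};

// Strip the _source wrapper from every hit.
const result = resp.hits.hits.map(hit => hit._source);
console.log(result); // [ { id: 1, title: 'Test' }, { id: 2, title: 'Other' } ]
```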
Upvotes: 3 [selected_answer]
|
2018/03/18
| 354 | 1,140 |
<issue_start>username_0: HTML
```
id="fees-inp">
Admission Fees()
```
Javascript
```
$(".dep-adm-inp").hide();
```
The div is hidden when the page loads, but if the PHP variable is not empty then I want to show the div.
You can override it by setting `display: block!important;` (or whatever your `display` property is) in CSS:
```js
setTimeout(() => {
$('.item').hide();
}, 2000);
```
```css
.container {
display: flex;
}
.item {
width: 200px;
height: 100px;
background-color: darkgoldenrod;
margin: 10px;
}
/* This makes .item visible even after .hide() */
.item-visible {
display: block!important;
}
```
```html
Will be visible
Will be hidden
```
Upvotes: 1 <issue_comment>username_2: set the class variable `dep-adm-inp` as hidden by default in css. If the variable `$adm_f` is not empty, set `style='display:block;'`. With this logic you don't need to call the statement `$(".dep-adm-inp").hide();` in your javascript. This way the div will be shown only if the variable `$adm_f` is not empty.
Upvotes: 2
|
2018/03/18
| 599 | 2,340 |
<issue_start>username_0: I have the following class:
**EmailNotification**
```
namespace App\Component\Notification\RealTimeNotification;
use Symfony\Bridge\Twig\TwigEngine;
use Symfony\Bundle\FrameworkBundle\Templating\EngineInterface;
use App\Component\Notification\NotificationInterface;
class EmailNotification implements NotificationInterface
{
private $logNotification;
public function __construct(LogNotification $logNotification, \Swift_Mailer $mailer, EngineInterface $twigEngine)
{
$this->logNotification = $logNotification;
}
public function send(array $options): void
{
$this->logNotification->send($options);
dump('Sent to email');
}
}
```
I have the following service definition on my yml:
```
app.email_notification:
class: App\Component\Notification\RealTimeNotification\EmailNotification
decorates: app.log_notification
decoration_inner_name: app.log_notification.inner
arguments: ['@app.log_notification.inner', '@mailer', '@templating']
```
However, when i tried to run my app it throws an Exception saying:
>
> Cannot autowire service
> "App\Component\Notification\RealTimeNotification\EmailNotification":
> argument "$twigEngine" of method "__construct()" has type
> "Symfony\Bundle\FrameworkBundle\Templating\EngineInterface" but this
> class was not found.
>
>
>
Why is that so?
Thanks!<issue_comment>username_1: Most likely you don't have Templating included in your project, in Symfony 4 you have to require it explicitly:
```
composer require symfony/templating
```
Upvotes: 5 [selected_answer]<issue_comment>username_2: You have to install symfony/templating
```
composer require symfony/templating
```
change a little bit config/packages/framework.yaml
```
framework:
templating:
engines:
- twig
```
Upvotes: 5 <issue_comment>username_3: I managed to do it with Twig Environment and HTTP Response
```
<?php
namespace App\Controller;
use Twig\Environment;
use Symfony\Component\HttpFoundation\Response;
class MyClass
{
private $twig;
public function __construct(Environment $twig)
{
$this->twig = $twig;
}
public function renderTemplateAction($msg)
{
return new Response($this->twig->render('myTemplate.html.twig'));
}
}
```
Upvotes: 5
|
2018/03/18
| 1,220 | 2,646 |
<issue_start>username_0: Given a list like this, where first column is the id and second is a string,
```
[ [2, ["00_01_02"]],
[1, ["00_03_04"]],
[3, ["00_03_04"]],
[6, ["00_03_04"]],
[4, ["01_02_03"]],
[5, ["01_02_03"]],
]
```
As you can see there are adjacent elements that are the same. For example, the id 1,3 and 6 have the same string so I would like to group them up into another list. Same goes for id 5 and 4. We can also assume that the list is in sorted order by the string.
I would like to solve this problem in
O(NC) if possible, where N is the number of element in the list, C is the number of characters in the string.
Example output would be something like,
```
[ [[1,3,6], ["00_03_04"]],
[[4,5] , ["01_02_03"]] ]
```<issue_comment>username_1: You should definitely use [collections.defaultdict](https://docs.python.org/3/library/collections.html#collections.defaultdict):
```
from collections import defaultdict
l = [ [2, ["00_01_02"]],
[1, ["00_03_04"]],
[3, ["00_03_04"]],
[6, ["00_03_04"]],
[4, ["01_02_03"]],
[5, ["01_02_03"]],
]
new_d = defaultdict(list)
for v, k in l:
new_d[k[0]].append(v)
new_d
Out[102]: defaultdict(list, {'00_01_02': [2], '00_03_04': [1, 3, 6], '01_02_03': [4, 5]})
```
If you want output similar to what you requested:
```
[[v,[k]] for k,v in new_d.items() if len(v) > 1]
Out[118]: [[[1, 3, 6], ['00_03_04']], [[4, 5], ['01_02_03']]]
```
Upvotes: 0 <issue_comment>username_2: Use `itertools` for efficiency -
```
a = [ [2, ["00_01_02"]],
[1, ["00_03_04"]],
[3, ["00_03_04"]],
[6, ["00_03_04"]],
[4, ["01_02_03"]],
[5, ["01_02_03"]],
[7, ["00_03_04"]],
]
from itertools import groupby
from operator import itemgetter
print([ [[ g[0] for g in grp], key] for key, grp in groupby(a, key=itemgetter(1))])
```
**Output**
```
[[[2], ['00_01_02']], [[1, 3, 6], ['00_03_04']], [[4, 5], ['01_02_03']], [[7], ['00_03_04']]]
```
Upvotes: 1 <issue_comment>username_3: You can try in one line:
```
data=[ [2, ["00_01_02"]],
[1, ["00_03_04"]],
[3, ["00_03_04"]],
[6, ["00_03_04"]],
[4, ["01_02_03"]],
[5, ["01_02_03"]],
]
import itertools
print([list(j) for i,j in itertools.groupby(data,key=lambda x:x[1][0])])
```
output:
```
[[[2, ['00_01_02']]], [[1, ['00_03_04']], [3, ['00_03_04']], [6, ['00_03_04']]], [[4, ['01_02_03']], [5, ['01_02_03']]]]
```
if you don't want to use any import then:
```
similar={}
for j in data:
if j[1][0] not in similar:
similar[j[1][0]]=[j[0]]
else:
similar[j[1][0]].append(j[0])
print(similar)
```
output:
```
{'01_02_03': [4, 5], '00_01_02': [2], '00_03_04': [1, 3, 6]}
```
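Since the input is guaranteed sorted by the string, a single pass with no imports also produces exactly the `[[ids], [string]]` shape from the question — a sketch:

```python
data = [[2, ["00_01_02"]],
        [1, ["00_03_04"]],
        [3, ["00_03_04"]],
        [6, ["00_03_04"]],
        [4, ["01_02_03"]],
        [5, ["01_02_03"]]]

grouped = []
for item_id, (s,) in data:
    # Extend the last group while the string matches, else open a new group.
    if grouped and grouped[-1][1][0] == s:
        grouped[-1][0].append(item_id)
    else:
        grouped.append([[item_id], [s]])

# Keep only groups with more than one id, matching the expected output.
result = [g for g in grouped if len(g[0]) > 1]
print(result)  # [[[1, 3, 6], ['00_03_04']], [[4, 5], ['01_02_03']]]
```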
Upvotes: 0
|
2018/03/18
| 490 | 1,564 |
<issue_start>username_0: I am trying to parse html string using a regular expression.
Full html is loaded in a string variable and i know the id of the element. How to get the name of that particular element
in the below example id is `field-options-Real-fc` and the expected result is `f4186d62184e277e2968ece68da25a860`
Can anyone help me with the regular expression to match the name ?
value property is also unique and the html format is remains same.
```
- Real FC
```<issue_comment>username_1: You can use a Positive Lookahead to find the id, then use a capturing group (`$1`) to pull the ID itself, terminating the group at the next double quote.
```
(?=id=")id="(.*)"\s
```
[DEMO](https://regex101.com/r/aS4tDr/1/)
========================================
---
Edit: This still works with your [latest revision](https://stackoverflow.com/revisions/49344763/4).
[DEMO (Revised for Rev#4)](https://regex101.com/r/aS4tDr/2)
===========================================================
Upvotes: 0 <issue_comment>username_2: It's better to use XML parsers like BeautifulSoup (python)
```
import BeautifulSoup
soup = BeautifulSoup.BeautifulSoup(your_html_string)
elem = soup.find(id="field-options-Real-fc")
name = elem['name']
```
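If a third-party parser is not available, Python's built-in `html.parser` can do the same lookup (the sample markup below is reconstructed from the question, so treat it as illustrative):

```python
from html.parser import HTMLParser

class NameFinder(HTMLParser):
    """Record the name attribute of the tag carrying the target id."""
    def __init__(self, target_id):
        super().__init__()
        self.target_id = target_id
        self.name = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if attrs.get("id") == self.target_id:
            self.name = attrs.get("name")

html = ('<option id="field-options-Real-fc" '
        'name="f4186d62184e277e2968ece68da25a860" value="1">Real FC</option>')
finder = NameFinder("field-options-Real-fc")
finder.feed(html)
print(finder.name)  # f4186d62184e277e2968ece68da25a860
```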
Upvotes: 1 <issue_comment>username_2: To cover all (two) possible orders or attrs, use lookahead:
```
/\<(?=(?:[^>]|"[^"]*")*id="field-options-Real-fc")(?:[^>]|"[^"]*")*name="([^"]*)"/
```
`(?:[^>]|"[^"]*")` is needed here to disallow a '>' character unless quoted.
<https://regex101.com/r/aS4tDr/3>
Upvotes: 0
|
2018/03/18
| 423 | 1,409 |
<issue_start>username_0: This algorithm is giving me trouble, I cannot find any sources online about dealing with the while loop that is also affected by the outer for loop. Is there a complicated process, or can you look from the loop that it is simply (outer loop = n , inner loop = %%%%) ? Any help is appreciated thank you.
|
2018/03/18
| 468 | 1,470 |
<issue_start>username_0: I'm looking to obfuscate some text by making it completely illegible.
While this is very easy to do in handwriting, I need it to be done on a webpage in Verdana.
I was thinking that I could possibly make a Verdana-esque font that has the top and bottom halves of many letters switched, or I could overlap lines of text such that it becomes an unreadable mess.
An alternative would be to use DeepStyle images in place of text.
What's the best way to go about this?<issue_comment>username_1: This q might be closed, but wth, here's something, with an idea from [here](https://css-tricks.com/snippets/css/blurry-text/) (blur), plus my own thrown in.
```css
p {
font-family: verdana;
text-decoration: line-through;
}
.blur {
color: transparent;
text-shadow: 0 0 7px rgba(0,0,0,0.5);
}
```
```html
Now that we know who you are, I know who I am. I'm not a mistake!
It all makes sense! In a comic, you know how you can tell who the
arch-villain's going to be? He's the exact opposite of the hero.
And most times they're friends, like you and me! I should've known
way back when... You know why, David? Because of the kids. They
called me Mr Glass.
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: Or perhaps a `text-shadow` solution?
```css
#mangle {
text-shadow: 0.1em 0.1em,
-0.1em 0.1em,
0.1em -0.1em,
-0.1em -0.1em;
}
```
```html
Sufficiently obfuscated?
```
Upvotes: 2
|
2018/03/18
| 1,093 | 4,242 |
<issue_start>username_0: I already searched for answers but I couldn't find any answer that could solve my problem.
I have 2 tables: `phones` and `phone_types`. The first one has a foreign key associated with the phone\_types primary key. I want to show on view the name of phone\_type through phone object. Something like `$phone->phone_type->name`.
My code generates this error: `Trying to get property of non-object (View: /home/ablono/dev/MisServices/resources/views/painel/phones/home.blade.php)`
**My code is listed below.**
**phone\_types migration table:**
```
<?php
use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;
class CreatePhoneTypesTable extends Migration
{
/**
* Run the migrations.
*
* @return void
*/
public function up()
{
Schema::create('phone_types', function (Blueprint $table) {
$table->increments('id');
$table->string('name', 20);
$table->string('description', 30);
$table->timestamps();
});
}
/**
* Reverse the migrations.
*
* @return void
*/
public function down()
{
Schema::dropIfExists('phone_types');
}
}
```
**phones migration table:**
```
<?php
use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;
class CreatePhonesTable extends Migration
{
/**
* Run the migrations.
*
* @return void
*/
public function up()
{
Schema::create('phones', function (Blueprint $table) {
$table->increments('id');
$table->string('ddd')->default('13');
$table->string('number', 20);
$table->integer('phone_type_id')->unsigned();
$table->foreign('phone_type_id')
->references('id')
->on('phone_types')
->onDelete('cascade');
$table->timestamps();
});
Schema::create('phone_user', function (Blueprint $table) {
$table->increments('id');
$table->integer('phone_id')->unsigned();
$table->integer('user_id')->unsigned();
$table->foreign('phone_id')
->references('id')
->on('phones')
->onDelete('cascade');
$table->foreign('user_id')
->references('id')
->on('users')
->onDelete('cascade');
$table->timestamps();
});
}
/**
* Reverse the migrations.
*
* @return void
*/
public function down()
{
Schema::dropIfExists('phone_user');
Schema::dropIfExists('phones');
}
}
```
**I didn't code anything on PhoneType model, but here is the code:**
```
<?php
namespace App;
use Illuminate\Database\Eloquent\Model;
class PhoneType extends Model
{
//
}
```
**Phone model:**
```
<?php
namespace App;
use Illuminate\Database\Eloquent\Model;
class Phone extends Model
{
public function users()
{
return $this->belongsToMany(\App\User::class);
}
public function phoneType()
{
return $this->hasMany(\App\PhoneType::class);
}
}
```
**Method that is sending that to view:**
```
public function index()
{
$phones = Phone::all();
return view('painel.phones.home', compact('phones'));
}
```
**Part of view that is listing data:**
```
ID | DDD | Número | Tipo de Telefone | Ações |
@foreach($phones as $phone)
| {{ $phone->id }} | {{ $phone->ddd }} | {{ $phone->number }} | {{ $phone->phone_type_id->name }} | |
@endforeach
```<issue_comment>username_1: Change your relationship to `belongsTo()`:
```
public function phoneType()
{
return $this->belongsTo(\App\PhoneType::class, 'phone_type_id');
}
```
And in `view`
```
{{ $phone->phone_type->name }}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Please check your Eloquent model relationship. You access a related model's property like this:
$mainModel->relationName->property (where relationName is the relationship method defined on the model you want to access)
Note: when accessing a relationship method as a property, convert the method name to **snake case**.
Upvotes: 0
|
2018/03/18
| 371 | 1,382 |
<issue_start>username_0: I have a loading screen that takes about 15-30 seconds depending on the data that's loaded. It loads about 50 items and displays on the page:
`Loading item x`
It uses an observable/subscription for each data call made to the DB. Upon receiving the data, the subscription fires off and adds it to an HTML string:
```
sync() {
this.syncStatus = "Starting sync"
this.syncService.sync().subscribe((status: string) => {
this.syncStatus += "" + status + '';
}, (error: string) => {
console.log(error);
}, () => {
this.Redirect();
});
}
```
As of now, it simply shows the list and it cuts off the list display because it gets so long (again, 50+ items, sometimes hundreds). I was wondering, how do I show each individual item to the page for 5s then hide it?
|
2018/03/18
| 554 | 2,138 |
<issue_start>username_0: I've been attempting to reassign a particular vector element to a new value and received a error from the compiler, and I'm not sure I understand it. So I believed that you could reassign a single vector element in the same way you could reassign an array's element.
```
std::vector<int> myVector[10];
myVector[5] = 6;
```
Or you could alternatively use the built in '.at' to access the vector with bounds checking. When I was writing some trivial code just to understand some concepts better I ran across a peculiar situation.
```
int main()
{
std::vector<int> test[10];
test[3] = 5;
if (test[3] != 6)
{
std::cout << "It works!" << std::endl;
}
return 0;
}
```
Now this piece of code flags an error saying that the assignment operator '=' and the logical operator '!=' don't match based on these operands. Now if I use the arrow operator '->' the code works just fine. Which is good, but I thought, perhaps mistakenly, that the arrow operator was used when dereferencing a pointer-to-object. I attempted to google these results, but perhaps due to the very rudimentary nature of it, I couldn't find much on the topic. Although, I would like to mention on a few sites with "c++ tutorials" I did see that they used the assignment operator without dereferencing the vector. Now this happens in both Visual Studios 2017 as well as the most recent version of Code::Blocks. Was I wrong? Do you actually need to utilize the arrow operator? Or am I missing something even more basic?<issue_comment>username_1: You created an array of 10 vectors, not a vector of 10 elements. A vector is ultimately a class type, so you need to initialize it via a constructor:
```
std::vector<int> test(10);
```
The way you did it originally, meant you tried to assign the value 5 for the vector at index 3. Vectors don't support being assigned numbers, so that's what the error is about.
Upvotes: 3 [selected_answer]<issue_comment>username_2: You're declaring an array of vectors rather than one vector of some initial length. Use the following instead of your declaration:
```
std::vector<int> myVector(10);
```
Upvotes: 2
|
2018/03/18
| 365 | 1,359 |
<issue_start>username_0: I'm getting this error on my site after I tried to convert to mysqli on line 26 (*$rs = mysqli\_query($link,MySQLi) or die(mysqli\_connect\_error());*)
My code is like this
//Get configuration control record from database
```
$MySQLi = "SELECT configValLong
FROM storeadmin
WHERE configVar = 'controlRec'
AND adminType = 'C' ";
$rs = mysqli_query($link,MySQLi) or die(mysqli_connect_error());
$totalRows = mysqli_num_rows($rs);
if ($totalRows!=0){
$rows = mysqli_fetch_array($rs);
$configArr = trim($rows["configValLong"]);
if (strlen($configArr) > 0 ){
$configArr = explode("*|*",$configArr);
}
```
|
2018/03/18
| 848 | 2,966 |
<issue_start>username_0: I'd like to use Polly to do the following: Attempt a request with a very short timeout. If it fails, retry with a longer timeout.
I see that `Retry` can access `retryCount` like this:
```
Policy
.Handle()
.Retry(3, (exception, retryCount, context) =>
{
// do something
});
```
And I see that `Timeout` can specify an `int` or `TimeSpan`, like this:
```
Policy.Timeout(TimeSpan.FromMilliseconds(2500))
```
I even see that you can pass a function in to the timeout, like this:
```
Policy.Timeout(() => myTimeoutProvider()) // Func<TimeSpan> myTimeoutProvider
```
The `Func` option seems the most promising, but where could it access the retry count? It's tempting to keep state outside of the policy, but that's dangerous if I ever want to share the policy in a thread safe manner.
Any advice?<issue_comment>username_1: From: <https://github.com/App-vNext/Polly/wiki/Timeout>
```c#
int retryCount_ = 0;
Func<TimeSpan> myTimeoutProvider = () =>
    TimeSpan.FromMilliseconds(25 * retryCount_);
// Configure variable timeout via a func provider.
Policy
.Timeout(() => myTimeoutProvider()) // Func<TimeSpan> myTimeoutProvider
.Retry(3, (exception, retryCount, context) =>
{
retryCount_ = retryCount;
// do something
})
```
or set the timeout similar to the way shown in:
```c#
Policy
.Handle()
.WaitAndRetry(5, retryAttempt =>
TimeSpan.FromSeconds(Math.Pow(2, retryAttempt))
);
```
**UPDATE**
You can also hook onto `onTimeoutAsync` callback and keep increasing your local variable that `myTimeoutProvider` relies on.
Upvotes: 1 <issue_comment>username_2: You can use Polly `Context` to pass state data between different policies involved in an execution. A unique instance of Polly `Context` flows with every Polly execution, so this is entirely thread-safe.
More detail on this technique [in this blog post](http://www.thepollyproject.org/2017/05/04/putting-the-context-into-polly/).
For example:
```
const string RetryCountKey = "RetryCount";
RetryPolicy retryStoringRetryCount = Policy
.Handle()
.Retry(3, (exception, retryCount, context) =>
{
Console.WriteLine("Storing retry count of " + retryCount + " in execution context.");
context[RetryCountKey] = retryCount;
});
TimeoutPolicy timeoutBasedOnRetryCount = Policy
.Timeout(context =>
{
int tryCount;
try
{
tryCount = (int) context[RetryCountKey];
}
catch
{
tryCount = 0; // choose your own default for when it is not set; also applies to first try, before any retries
}
int timeoutMs = 25 \* (tryCount + 1);
Console.WriteLine("Obtained retry count of " + tryCount + " from context, thus timeout is " + timeoutMs + " ms.");
return TimeSpan.FromMilliseconds(timeoutMs);
});
PolicyWrap policiesTogether = retryStoringRetryCount.Wrap(timeoutBasedOnRetryCount);
```
(*Note*: Of course this ^ can be made more concise. Set out here for maximum clarity.)
Here is a [live dotnetfiddle example](https://dotnetfiddle.net/n0EOuo).
Upvotes: 4 [selected_answer]
|
2018/03/18
| 1,062 | 3,048 |
<issue_start>username_0: I have a list containing ID's of students:
```
ID = [1,2,3]
```
and i have a table containing student names and their hobby:
```
student = [['Jack','Fishing'],['Alice','Reading'],['Mun','Football']]
```
I want to concatenate the ID to the first position of each sublist within the student list where i obtain:
```
[[1,'Jack','Fishing'],[2,'Alice','Reading'],[3,'Mun','Football']]
```
I tried:
```
for i in range(len(student)):
student = ID[i] + student[i]
```
but I'm getting an error saying unsupported operand type.<issue_comment>username_1: You can use `zip` with list comprehension
**Ex:**
```
ID = [1,2,3]
l = [['Jack','Fishing'],['Alice','Reading'],['Mun','Football']]
newList = [[i[0]]+i[1] for i in zip(ID, l)]
print(newList)
```
**Output:**
```
[[1, 'Jack', 'Fishing'], [2, 'Alice', 'Reading'], [3, 'Mun', 'Football']]
```
Upvotes: 2 <issue_comment>username_2: @Clink - The element you get from `range(len(student))` is of `int` type, and you are trying to merge it with a list. You should merge iterable types; that's why you are getting this error.
*TypeError: unsupported operand type(s) for +: 'int' and 'list'*
You can try solutions given by username_1 or can follow below approach.
```
[std.insert(0, ID[idx]) for idx, std in enumerate(student)]
```
**Output**
[[1, 'Jack', 'Fishing'], [2, 'Alice', 'Reading'], [3, 'Mun', 'Football']]
Upvotes: 0 <issue_comment>username_3: @clink when you write `ID[i]`, it picks an element from the list `ID`. All the elements in the list `ID` are of type `int`. Also, all the elements in the `students` list are of type `list`. Hence when you use `+` operator between an `int` and `list` types you get the error
```
TypeError: unsupported operand type(s) for +: 'int' and 'list'
```
What you need to do is to put the `int` into a new `list` to get the results you are seeking. Below is the modified code:
```
ID = [1,2,3]
student = [['Jack','Fishing'],['Alice','Reading'],['Mun','Football']]
for i in range(len(student)):
student[i] = [ID[i]] + student[i]
```
**Output**
```
[[1, 'Jack', 'Fishing'], [2, 'Alice', 'Reading'], [3, 'Mun', 'Football']]
```
Pay attention to a single change: `ID[i]` was changed to `[ID[i]]`.
Upvotes: 1 <issue_comment>username_4: just a little change:
```
ID = [1,2,3]
student = [['Jack','Fishing'],['Alice','Reading'],['Mun','Football']]
for i in range(len(student)):
student[i].insert(0,ID[i])
print(student)
```
output:
```
[[1, 'Jack', 'Fishing'], [2, 'Alice', 'Reading'], [3, 'Mun', 'Football']]
```
Upvotes: 0 <issue_comment>username_5: You don't need loop or anything just try:
```
ID = [1,2,3]
student = [['Jack','Fishing'],['Alice','Reading'],['Mun','Football']]
print(list(zip(ID,student)))
```
output:
```
[(1, ['Jack', 'Fishing']), (2, ['Alice', 'Reading']), (3, ['Mun', 'Football'])]
```
If you don't want nested list then:
```
print(list(map(lambda x:[x[0],*x[1]],zip(ID,student))))
```
output:
```
[[1, 'Jack', 'Fishing'], [2, 'Alice', 'Reading'], [3, 'Mun', 'Football']]
```
Upvotes: 0
|
2018/03/18
| 691 | 2,698 |
<issue_start>username_0: I researched this issue a lot and found the same question, which doesn't solve my problem, so I'm starting a new question.
[Xamarin Android: Android.Views.InflateException - Error when loading Layout](https://stackoverflow.com/questions/48460907/xamarin-android-android-views-inflateexception-error-when-loading-layout)
Getting **Xamarin Forms: Android.Views.InflateException: Timeout exceeded getting exception** details when loading layout.
Main.axml
```
<?xml version="1.0" encoding="utf-8"?>
```
I have not used layout:width and layout:height properties in my axml and only using an icon having 4kb size.
Please suggest a solution for this issue, thanks in advance...<issue_comment>username_1: >
> Getting Xamarin Forms: Android.Views.InflateException: Timeout exceeded getting exception details when loading layout.
>
>
>
There are two things needs to be confirmed before using `TextInputLayout`:
1. Please make sure, `Xamarin.Android.Support.Design` lib is correctly referenced in your `Xamarin.Android` project. If you are using `Xamarin.Forms`, it should be referenced by default.
2. Please make sure the Activity is of type `Android.Support.V7.App.AppCompatActivity`.
If none of the above things help, please provide a basic demo that can reproduce the problem.
Update:
Upon testing the project you shared, I found you need to use a `Theme.AppCompat` theme for your activity. You can do that by following these steps:
1. Create a `styles.xml` file under `Resources\values\` folder with below codes:
```
<?xml version="1.0" encoding="utf-8"?>
```
2. In `MainActivity.cs` define the `Activity` attribute to leverage `AppTheme` like this:
```
[Activity(Label = "DinexFeedback", MainLauncher = true, Icon = "@drawable/icon",Theme ="@style/AppTheme")]
public class MainActivity : Android.Support.V7.App.AppCompatActivity
{
...
```
Then the project will run fine.
Upvotes: 2 [selected_answer]<issue_comment>username_2: Unhandled Exception: Android.Views.InflateException:
Make sure the necessary NuGet packages are installed in Visual Studio.
I used CardView in my project; I got this issue because I didn't install Xamarin.Android.Support.CardView.
So you should install the support NuGet packages for all the widgets you use (here, CardView).
Click on the project -> Manage NuGet Packages -> check the installed packages; if a package is not installed, search for it and download it.
**updated on 10/12/2018**
Also check your layout to make sure all widget names start with a capital letter. For example, a lowercase widget tag that works in an Android Java layout will throw the inflate exception in a Xamarin.Android (C#) project, where the tag casing must match the widget class name (e.g. `TextView`).
Upvotes: 0
|
2018/03/18
| 410 | 1,434 |
<issue_start>username_0: Why do I get `SyntaxError: missing ) after argument list` with the following code?:
```
$('#login-form').on('submit', function(event){
event.preventDefault();
console.log("form submitted!"); // sanity check
$('#table').load('http://127.0.0.1:8000/results',function(){
$('#go_back').remove();
}});
```<issue_comment>username_1: You forgot the ")" at the last line . It should be:
```
$('#login-form').on('submit', function(event){
event.preventDefault();
console.log("form submitted!"); // sanity check
$('#table').load('http://127.0.0.1:8000/results',function(){
$('#go_back').remove();
}); // <-- Add it Here!
});
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Line up your closing brackets and parentheses in the future to minimize the occurrence of these types of mistakes. You are missing exactly what the error tells you: a closing parenthesis.
```
$('#login-form').on('submit', function(event){
event.preventDefault();
console.log("form submitted!"); // sanity check
$('#table').load('http://127.0.0.1:8000/results',function(){
$('#go_back').remove();
})
})
```
Upvotes: 1 <issue_comment>username_3: ```
$('#login-form').on('submit', function(event){
event.preventDefault();
console.log("form submitted!"); // sanity check
$('#table').load('http://127.0.0.1:8000/results',function(){
$('#go_back').remove();
})});
```
Upvotes: 0
|
2018/03/18
| 646 | 2,551 |
<issue_start>username_0: For years, I understood that when tables are joined, one row from the primary table is joined to a row in the target table after applying conditions, i.e. the query returns at most as many rows as the primary table. But I have seen that one row in the primary table can be joined multiple times if the conditions allow, e.g. the query below's COUNT function would not work without duplicate rows from the primary table.
```
SELECT node.name, (COUNT(parent.name) - 1) AS depth
FROM nested_category AS node,
nested_category AS parent
WHERE node.lft BETWEEN parent.lft AND parent.rgt
GROUP BY node.name
ORDER BY node.lft;
```
Which produces this result
```
+----------------------+-------+
| name | depth |
+----------------------+-------+
| ELECTRONICS | 0 |
| TELEVISIONS | 1 |
| TUBE | 2 |
| LCD | 2 |
| PLASMA | 2 |
| PORTABLE ELECTRONICS | 1 |
| MP3 PLAYERS | 2 |
| FLASH | 3 |
| CD PLAYERS | 2 |
| 2 WAY RADIOS | 2 |
+----------------------+-------+
```
I know I may be asking something really basic, but how exactly are rows joined together in the most simplest joins possible, does mysql take steps like when regex engine is executing pattern against string?<issue_comment>username_1: You forgot the ")" at the last line . It should be:
```
$('#login-form').on('submit', function(event){
event.preventDefault();
console.log("form submitted!"); // sanity check
$('#table').load('http://127.0.0.1:8000/results',function(){
$('#go_back').remove();
}); // <-- Add it Here!
});
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Line up your closing brackets and parentheses in the future to minimize the occurrence of these types of mistakes. The error tells you exactly what you are missing.
```
$('#login-form').on('submit', function(event){
event.preventDefault();
console.log("form submitted!"); // sanity check
$('#table').load('http://127.0.0.1:8000/results',function(){
$('#go_back').remove();
})
})
```
Upvotes: 1 <issue_comment>username_3: ```
$('#login-form').on('submit', function(event){
event.preventDefault();
console.log("form submitted!"); // sanity check
$('#table').load('http://127.0.0.1:8000/results',function(){
$('#go_back').remove();
})});
```
Upvotes: 0
|
2018/03/18
| 359 | 1,322 |
<issue_start>username_0: Assuming this is my pojo:
```
public class POJOFoo{}
public class POJOFoo2{}
```
Assuming this is my method, I want to use the same method to
display POJOFoo and POJOFoo2.
```
public void Foo(Object pojofoo/pojofoo2) {
ObservableList observableDescriptions =
FXCollections.observableArrayList(new LinkedList<>());
}
```
I seriously can't imagine how to find a solution to this situation. Thanks for the help.<issue_comment>username_1: Kurt, when passing an object as a parameter, add this object type to the method, like so:
```
public void Foo(POJOFoo pojofoo){
ObservableList observableDescriptions =
FXCollections.observableArrayList(new LinkedList<>());
}
```
Upvotes: 0 <issue_comment>username_2: You can replace the datatype of the parameter `pojofoo` from `Object` to `POJOFoo` in the signature of the method.
```
public void Foo(POJOFoo pojofoo) {
ObservableList observableDescriptions =
FXCollections.observableArrayList(new LinkedList<>());
}
```
In case, you want to use `Generics` to allow the method to use any `POJO`, you could probably rewrite the function as -
```
public <T> void Foo(T pojofoo) {
ObservableList observableDescriptions =
FXCollections.observableArrayList(new LinkedList<>());
}
```
Upvotes: 2 [selected_answer]
|
2018/03/18
| 511 | 1,972 |
<issue_start>username_0: I'm building an OData v2 service with Olingo which partly gets data from an S/4 System through a custom Gateway OData service, partly from Cloud Platform Neo and combines both. To achieve this I use the SDK for service development included in the S/4 Cloud SDK (<https://help.sap.com/viewer/p/SDK_FOR_SERVICE_DEVELOPMENT>).
Here's my problem: when trying to call my Gateway OData service from my Olingo service, it gives me the error "No error field found in JSON". After some tries I found the stack trace, but it doesn't really help either, since it just says that the metadata request failed with no reason why. Here's my stack trace:
[](https://i.stack.imgur.com/VLtXe.jpg)
Can anyone tell me what might be the cause of this (credentials and URLs are double checked) and / or help me in resolving this issue?
Thanks a lot in advance!
**EDIT**
The issue seems to be connected to the destination configuration. I tried an HTTP destination instead of HTTPS and now it's working... Still I'd like to get it to work on HTTPS too.<issue_comment>username_1: Kurt, when passing an object as a parameter, add this object type to the method, like so:
```
public void Foo(POJOFoo pojofoo){
ObservableList observableDescriptions =
FXCollections.observableArrayList(new LinkedList<>());
}
```
Upvotes: 0 <issue_comment>username_2: You can replace the datatype of the parameter `pojofoo` from `Object` to `POJOFoo` in the signature of the method.
```
public void Foo(POJOFoo pojofoo) {
ObservableList observableDescriptions =
FXCollections.observableArrayList(new LinkedList<>());
}
```
In case, you want to use `Generics` to allow the method to use any `POJO`, you could probably rewrite the function as -
```
public <T> void Foo(T pojofoo) {
ObservableList observableDescriptions =
FXCollections.observableArrayList(new LinkedList<>());
}
```
Upvotes: 2 [selected_answer]
|
2018/03/18
| 515 | 1,717 |
<issue_start>username_0: How can I modify the script below to read the values in a column, but only read the values of the specified cells: L9, L10 ... L18, L19 .... L27, L28 .... (two by two, jumping eight, to the end of the column at L2000), and when it finds the value "1" in one of those cells, replace the value that is 4 rows up with "FALSE"?
e.g. if the number 1 is found in cell L19, then replace the value of cell L15 with the word "FALSE"... and if the number 1 is found in cell L28, then replace the value of cell L24 with the word "FALSE".
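Before touching the script, the index pattern itself can be prototyped. A Python sketch of the row arithmetic (assuming, from the examples above, that the pattern of cell pairs repeats every 9 rows and that the data array starts at row 9; the same indexing carries over to the Apps Script below):

```python
def apply_false_four_rows_up(column):
    """column[i] holds the value of cell L(9 + i), as returned by getValues()."""
    for i, value in enumerate(column):
        in_pair = i % 9 in (0, 1)   # rows 9,10, 18,19, 27,28, ...
        if in_pair and value == 1 and i >= 4:
            column[i - 4] = "FALSE"  # e.g. a 1 in L19 rewrites L15
    return column

col = [0] * 22                 # rows 9..30
col[10] = 1                    # a 1 in row 19 (index 10) ...
apply_false_four_rows_up(col)
print(col[6])                  # ... marks row 15 (index 6): FALSE
```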
```
var ss = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("ABC");
var lastrow = ss.getLastRow();
var range = ss.getRange(9, 12, lastrow - 1, 1);
var data = range.getValues();
for (var i=0; i < data.length; i++) {
if (data[i][0] == "1") {
data[i][0] = "FALSE";
}
}
range.setValues(data);
```<issue_comment>username_1: Kurt, when passing an object as a parameter, add this object type to the method, like so:
```
public void Foo(POJOFoo pojofoo){
ObservableList observableDescriptions =
FXCollections.observableArrayList(new LinkedList<>());
}
```
Upvotes: 0 <issue_comment>username_2: You can replace the datatype of the parameter `pojofoo` from `Object` to `POJOFoo` in the signature of the method.
```
public void Foo(POJOFoo pojofoo) {
ObservableList observableDescriptions =
FXCollections.observableArrayList(new LinkedList<>());
}
```
In case, you want to use `Generics` to allow the method to use any `POJO`, you could probably rewrite the function as -
```
public <T> void Foo(T pojofoo) {
ObservableList observableDescriptions =
FXCollections.observableArrayList(new LinkedList<>());
}
```
Upvotes: 2 [selected_answer]
|
2018/03/18
| 615 | 2,088 |
<issue_start>username_0: I want to learn .NET Core application development. I'm currently using `Visual Studio 2012`.
>
> Is there a way to develop a .NET Core application with Visual Studio 2012, or is it
> compulsory to use `Visual Studio 2017` for .NET Core development?
>
>
><issue_comment>username_1: You can (and should) use [Visual Studio Code](https://code.visualstudio.com).
Upvotes: 0 <issue_comment>username_2: According to [the official docs](https://learn.microsoft.com/en-us/dotnet/core/windows-prerequisites?tabs=netcore2x), if you want to develop .NET Core apps on Windows, you have three options:
* [Command line](https://learn.microsoft.com/en-us/dotnet/core/tutorials/using-with-xplat-cli) with any editor of your choice (it can be VS 2012 as well)
* [Visual Studio 2017](https://www.visualstudio.com/downloads/) (Microsoft has no intentions to support earlier versions of VS for .Net Core development) - it also has Community Edition
* [Visual Studio Code](https://code.visualstudio.com) for easy cross-platform development, best fit for ASP.NET Core apps
Upvotes: 3 <issue_comment>username_3: **We can develop a .NET Core application using several tools or methods.**
1. Command Prompt
>
> We can use the command prompt to develop a .NET Core application; the only
> additional thing we need is the .NET Core SDK. You can follow [this](https://www.c-sharpcorner.com/article/building-first-console-application-using-command-prompt-in-net-core-2-0/).
>
>
>
2. Visual Studio Code
>
> You can use Visual Studio Code for .NET Core development, which is a
> lightweight application available for free. Follow [this](https://www.c-sharpcorner.com/article/crud-operation-with-asp-net-core-mvc-using-visual-studio-code-and-ado-net/) article.
>
>
>
3. Visual Studio 2017 community version
>
> This is also a way to develop a .NET Core application, and it comes with
> many features. After registering, you can use this one for free as well. Follow [this](https://www.c-sharpcorner.com/blogs/getting-started-angular-in-net-core-2).
>
>
>
Upvotes: 1 [selected_answer]
|
2018/03/18
| 760 | 2,193 |
<issue_start>username_0: Example of the nested dictionary with the keys to be removed.
```
{1: {'Email': '<EMAIL>',
'FirstName': 'John',
'Id': {'Value': 1},
'LastName': 'Doe',
'UserName': 'JohnDoe'},
2: {'Email': '<EMAIL>',
'FirstName': 'Jane',
'Id': {'Value': 2},
'LastName': 'Doe',
'UserName': 'JaneDoe'},
3: {'Email': '<EMAIL>',
'FirstName': 'Fred',
'Id': {'Value': 1},
'LastName': 'Doe',
'UserName': 'FredDoe'}}
```
Is it possible to remove the numeric keys and save the dictionary like below?
```
{{'Email': '<EMAIL>',
'FirstName': 'John',
'Id': {'Value': 1},
'LastName': 'Doe',
'UserName': 'JohnDoe'},
{'Email': '<EMAIL>',
'FirstName': 'Jane',
'Id': {'Value': 2},
'LastName': 'Doe',
'UserName': 'JaneDoe'},
{'Email': '<EMAIL>',
'FirstName': 'Fred',
'Id': {'Value': 1},
'LastName': 'Doe',
'UserName': 'FredDoe'}}
```<issue_comment>username_1: I think you want to create a dictionary that consists only of keys rather than keys and values
Upvotes: 1 <issue_comment>username_2: Assuming the dictionary is in a variable `d`, all you have to do is [`d.values()`](https://docs.python.org/3.6/library/stdtypes.html#dict.values), which will give you the values of each of the `key, value` pairs from the dictionary:
```
>>> d = {1: {'Email': '<EMAIL>', 'FirstName': 'John', 'Id': {'Value': 1}, 'LastName': 'Doe', 'UserName': 'JohnDoe'}, 2: {'Email': '<EMAIL>', 'FirstName': 'Jane', 'Id': {'Value': 2}, 'LastName': 'Doe', 'UserName': 'JaneDoe'}, 3: {'Email': '<EMAIL>', 'FirstName': 'Fred', 'Id': {'Value': 1}, 'LastName': 'Doe', 'UserName': 'FredDoe'}}
>>> l = list(d.values())
>>> l
[{'Email': '<EMAIL>', 'FirstName': 'John', 'Id': {'Value': 1}, 'LastName': 'Doe', 'UserName': 'JohnDoe'}, {'Email': '<EMAIL>', 'FirstName': 'Jane', 'Id': {'Value': 2}, 'LastName': 'Doe', 'UserName': 'JaneDoe'}, {'Email': '<EMAIL>', 'FirstName': 'Fred', 'Id': {'Value': 1}, 'LastName': 'Doe', 'UserName': 'FredDoe'}]
```
Upvotes: 3 [selected_answer]
|
2018/03/18
| 740 | 2,223 |
<issue_start>username_0: Let's say I have the following data.table.
```
dt = data.table(one=rep(2,4), two=rnorm(4))
dt
```
Now I have created a variable with a name of one column.
```
col_name = "one"
```
If I want to return that column as a data.table, I can do one of the following. The first option will return the column name as V1 and the second will actually set the column name to "one".
```
dt[,.(get(col_name))]
dt[,col_name, with=FALSE]
```
I'm wondering if there is a way to specify the column name while using the get command. Something like the following, which doesn't work.
```
dt[,as.symbol(col_name) = .(get(col_name))]
```
The reason that I need the column names with get is that I have a pretty extensive loop whereby I'm filling in empty columns. So it could end up looking like this, whereby I loop through and replace imp\_val with the median by the columns in cols.
```
dat2[is.na(get(imp_val)),
as.symbol(imp_val) := dat2[.BY, median(get(imp_val), na.rm=TRUE), on=get(cols)], by=c(get(cols))]
```<issue_comment>username_1: I think you want to create a dictionary that consists only of keys rather than keys and values
Upvotes: 1 <issue_comment>username_2: Assuming the dictionary is in a variable `d`, all you have to do is [`d.values()`](https://docs.python.org/3.6/library/stdtypes.html#dict.values), which will give you the values of each of the `key, value` pairs from the dictionary:
```
>>> d = {1: {'Email': '<EMAIL>', 'FirstName': 'John', 'Id': {'Value': 1}, 'LastName': 'Doe', 'UserName': 'JohnDoe'}, 2: {'Email': '<EMAIL>', 'FirstName': 'Jane', 'Id': {'Value': 2}, 'LastName': 'Doe', 'UserName': 'JaneDoe'}, 3: {'Email': '<EMAIL>', 'FirstName': 'Fred', 'Id': {'Value': 1}, 'LastName': 'Doe', 'UserName': 'FredDoe'}}
>>> l = list(d.values())
>>> l
[{'Email': '<EMAIL>', 'FirstName': 'John', 'Id': {'Value': 1}, 'LastName': 'Doe', 'UserName': 'JohnDoe'}, {'Email': '<EMAIL>', 'FirstName': 'Jane', 'Id': {'Value': 2}, 'LastName': 'Doe', 'UserName': 'JaneDoe'}, {'Email': '<EMAIL>', 'FirstName': 'Fred', 'Id': {'Value': 1}, 'LastName': 'Doe', 'UserName': 'FredDoe'}]
```
Upvotes: 3 [selected_answer]
|
2018/03/18
| 822 | 3,176 |
<issue_start>username_0: I am a newbie to `Jasmine` and a bit confused between the above two functions. My sole purpose is to give a fake implementation to a spy function. But if I put a debugger in `callFake`, it gets called, while `and.stub`'s function does not get called. Could anyone please explain the difference between these two functions?
```
spyOn(manager, 'getUsers').and.stub(function () {
//to do
});
```
vs
```
spyOn(manager, 'getUsers').and.callFake(function () {
//to do
});
```<issue_comment>username_1: Looking at the documentation located at <https://jasmine.github.io/2.0/introduction.html#section-Spies>, when you `spyOn` something it logs all the calls being made on the spied-on object method. This means that it is calling the actual method of the object, but keeping track of what calls were made.
If you want to allow using the original object, but don't want specific methods to be called, you have the options of using `and.callFake` and `and.stub`. The differences are in the method signatures.
---
`callFake` takes a function as a parameter. This allows you to fake the method call and return a value of your desire.
original method signature is `myMethod(param1: string): string`
```
spyOn(service, 'myMethod').and.callFake((param1) => {
expect(param1).toBe('value');
return 'returnValue';
});
```
---
`stub` has no parameters and merely intercepts the call to the method
```
spyOn(service, 'myMethod').and.stub();
```
myMethod can have parameters and can have a return type, but it doesn't matter since stub just intercepts the call and will return `null` if there is a return type.
---
In both instances, the method calls are logged and you can then do something like `expect(service.myMethod).toHaveBeenCalled()` or `expect(service.myMethod).toHaveBeenCalledWith('value')`
Upvotes: 4 <issue_comment>username_2: So, `let mySpy = spyOn(service, 'myMethod').and.stub()` is the exact same as `let mySpy = spyOn(service, 'myMethod')`. Both of these do the same 2 things:
1 - they will wait until that function is called and can report back on it later and tell you if it was called via `expect(mySpy).toHaveBeenCalled()`.
2 - they intercept the function and NEVER call it. So if you spy on a function, it doesn't actually get called. Which is extremely useful when paired with `let mySpy = spyOn(service, 'myMethod').and.returnValue(someUsefulReturnValue)`.
However, as a side note, in some situations you want to spy on a function to see if it was called, write an expectation that it was called, BUT actually allow it to get called, which is when you would use `let mySpy = spyOn(service, 'myMethod').and.callThrough()`
Whereas, `let mySpy = spyOn(service, 'myMethod').and.callFake()` does the same 2 things in the bullet points above, except with one benefit. You can call a function inside of `callFake()` that will execute inside of `myMethod`. I find this is not extremely useful when following programming principles, and is used less often.
I.e. `let mySpy = spyOn(service, 'myMethod').and.callFake(() => { this.somethingThatNeedsToBeUpdated = somethingUseful; })`
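The difference can also be modeled outside Jasmine. A tiny Python sketch of these spy semantics (the helper names are hypothetical, not Jasmine's API):

```python
def make_spy(target, fake=None, call_through=False):
    """Record every call; optionally run a fake (callFake) or the real
    function (callThrough); otherwise just intercept (stub)."""
    calls = []
    def spy(*args):
        calls.append(args)           # "toHaveBeenCalled" bookkeeping
        if fake is not None:         # like and.callFake(...)
            return fake(*args)
        if call_through:             # like and.callThrough()
            return target(*args)
        return None                  # like and.stub(): intercept silently
    spy.calls = calls
    return spy

real = lambda x: x * 2
stubbed = make_spy(real)
faked = make_spy(real, fake=lambda x: 99)
print(stubbed(3), faked(3), make_spy(real, call_through=True)(3))  # None 99 6
```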
Upvotes: 0
|
2018/03/18
| 291 | 1,034 |
<issue_start>username_0: I created a question component. In that component I put some hard-coded questions. When I try to run the application, it shows text undefined.
Here is my component:
```
import { Component, OnInit } from '@angular/core';
@Component({
selector: 'app-question-list',
templateUrl: './question-list.component.html',
styleUrls: ['./question-list.component.css']
})
export class QuestionListComponent implements OnInit {
questions: Object[];
constructor() {
this.questions = [
{
text: 'What is your name?',
answer: 'My name is Saeef'
},
{
text: 'What is your favorite color?',
answer: 'My favorite color is blue'
      }
    ];
}
ngOnInit() {
}
}
```
My HTML:
```
{{ question.text }}
{{ question.answer }}
```<issue_comment>username_1: Your **`ngFor`** syntax is wrong.
It should be as follows:
```
<div *ngFor="let question of questions">
  {{ question.text }}
  {{ question.answer }}
</div>
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Use `*ngFor` and the following:
{{ question['text'] }} {{ question['answer'] }}
Upvotes: 0
|
2018/03/18
| 288 | 950 |
<issue_start>username_0: I got this example from a website, and I am about to upgrade the security of my code. So may I have your opinion on whether this kind of code is strong enough to prevent injection?
```
$sql = sprintf(
"
INSERT INTO `members`
(`id`, `username`, `password`, `first_name`, `last_name`)
VALUES
('', '%s', '%s', '%s', '%s')
",
mysql_real_escape_string($con,$_POST['username']), // %s #1
mysql_real_escape_string($con,$_POST['password']), // %s #2
mysql_real_escape_string($con,$_POST['first_name']), // %s #3
mysql_real_escape_string($con,$_POST['last_name']) // %s #4
);
```
`$con` is from the mysql connection<issue_comment>username_1: Your **`ngFor`** syntax is wrong.
It should be as follows:
```
<div *ngFor="let question of questions">
  {{ question.text }}
  {{ question.answer }}
</div>
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Use `*ngFor` and the following:
{{ question['text'] }} {{ question['answer'] }}
Upvotes: 0
|
2018/03/18
| 318 | 926 |
<issue_start>username_0: How can I get the source, title, issn, author, ... from a json file: [JSON file](https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi?db=pubmed&id=29539636&retmode=json)
We tried with:
```
$new_pmid = $_POST['new_pmid'];
$api_json_url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi?db=pubmed&id=".$new_pmid."&retmode=json";
$json = file_get_contents($api_json_url);
$data = json_decode($json, TRUE);
echo $header[0]->result->$new_pmid->title;
....
```
But nothing happens...
Can you give me the solution for the JSON file (generated from the PubMed database)?
Thank you.<issue_comment>username_1: Your **`ngFor`** syntax is wrong.
It should be as follows:
```
<div *ngFor="let question of questions">
  {{ question.text }}
  {{ question.answer }}
</div>
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Use `*ngFor` and the following:
{{ question['text'] }} {{ question['answer'] }}
Upvotes: 0
|
2018/03/18
| 1,670 | 6,032 |
<issue_start>username_0: Sorry for the extremely basic question, but I want to figure out how to use a switch statement with cases that check if it's a certain string.
For example if I have an AnimalType enum and then I have an animal struct:
```
enum AnimalType: String {
case Mammal = "Mammal"
case Reptile = "Reptile"
case Fish = "Fish"
}
struct Animal {
let name: String
let type: String
}
```
If I want to go through a list of Animals and then have a switch statement, how would I match the Animal.Type string to the enum? I don't want to change the Animal struct to let type: AnimalType either.
```
switch Animal.type {
case :
...// how do I match the string to the enum?
```<issue_comment>username_1: ```
enum Command: String {
case Mammal = "Mammal"
case Reptile = "Reptile"
case Fish = "Fish"
}
let command = Command(rawValue: "d")
switch command {
case .Mammal?:
print("Mammal")
case .Reptile?:
print("second")
case .Fish?:
print("third")
case nil:
print("not found")
}
// prints "not found"
```
Upvotes: 3 <issue_comment>username_2: You can create an animal type from string rawValue and switch on that:
But first I'd change the cases to lowercase, thats the preferred style in Swift.
```
func checkType(of animal: Animal) {
guard let animalType = AnimalType(rawValue: animal.type) else {
print("Not an animal type")
return
}
switch animalType {
case .mammal: break
case .reptile: break
case .fish: break
}
}
```
Alternatively, you can also switch on the string and compare if it matches any of your AnimalType rawValues:
```
func checkType(of animal: Animal) {
switch animal.type {
case AnimalType.mammal.rawValue: break
case AnimalType.reptile.rawValue: break
case AnimalType.fish.rawValue: break
default:
print("Not an animal type")
break
}
}
```
Upvotes: 3 <issue_comment>username_3: I'd suggest you use an `enum` for comparison, making the `type` optional. You could also use the `rawValue` of the enum to compare them.
```
enum AnimalType: String {
case Mammal //No need of case Mammal = "Mammal"
case Reptile
case Fish
}
struct Animal {
let name: String
let type: AnimalType?
}
let lion = Animal(name: "Lion", type: .Mammal)
switch lion.type {
case .Mammal?:
break
case .Reptile?:
break
case .Fish?:
break
case nil:
break
}
```
**EDIT:**
As Matthew in the comment said, if you’re getting the objects from a server, you need to have custom decoding process to convert the string response to corresponding `AnimalType` enum for comparison. Otherwise you’re good with using just the enum.
Upvotes: 1 <issue_comment>username_4: If the strings are exactly the same as the enum, you don't have to define it. Following is my working solution.
```
enum SegueIdentifier: String {
case pushViewController1
case pushViewController2
case pushViewController3
}
switch SegueIdentifier(rawValue: segue.identifier ?? "") {
case .pushViewController1:
if let viewController1 = segue.destination as? ViewController1 {
viewController1.customProperty = customPropertyValue
}
case .pushViewController2:
if let viewController2 = segue.destination as? ViewController2 {
viewController2.customProperty = customPropertyValue
}
case .pushViewController3:
if let viewController3 = segue.destination as? ViewController3 {
viewController3.customProperty = customPropertyValue
}
default:
let message = "This case has not been handled."
let alertController = UIAlertController(title: "Alert", message: message, preferredStyle: .alert)
let okayAction = UIAlertAction(title: "Okay", style: .default) { (_:UIAlertAction) in }
alertController.addAction(okayAction)
present(alertController, animated: true, completion: nil)
}
```
Upvotes: 0 <issue_comment>username_5: Paste the code below in your Playground to see how it works.
You can basically create a list with animals in it, and reach the rawValue of an AnimalType to check when the switch on each animal in your list matches like this:
```
import UIKit
enum AnimalType: String {
case mammal = "Mammal"
case reptile = "Reptile"
case fish = "Fish"
case cat = "Cat"
case bug = "Bug"
}
struct Animal {
let name: String
let type: String
}
// Create some animals list
let animals: [Animal] = [Animal(name: "Kiwi", type: "Cat"),
Animal(name: "Copy", type: "Cat"),
Animal(name: "Oebi", type: "Cat"),
Animal(name: "Zaza", type: "Bug")]
// For each animal in the list
for animal in animals {
// Check if its type actually matches a type from your enum
switch animal.type {
case AnimalType.cat.rawValue:
print("\(animal.name) is a \(AnimalType.cat.rawValue)")
case AnimalType.bug.rawValue:
print("\(animal.name) is a \(AnimalType.bug.rawValue)")
default:
    print("This is not a cat nor a bug")
}
}
```
Unless you want type of your Animal struct to be of AnimalType. If so, you can do this:
```
import UIKit
enum AnimalType: String {
case mammal = "Mammal"
case reptile = "Reptile"
case fish = "Fish"
case cat = "Cat"
case bug = "Bug"
}
struct Animal {
let name: String
let type: AnimalType
}
// Create some animals list
let animals: [Animal] = [Animal(name: "Kiwi", type: .cat),
Animal(name: "Copy", type: .cat),
Animal(name: "Oebi", type: .cat),
Animal(name: "Zaza", type: .bug)]
// For each animal in the list
for animal in animals {
// Check if its type actually matches a type from your enum
switch animal.type {
case .cat:
print("\(animal.name) is a \(AnimalType.cat.rawValue)")
case .bug:
print("\(animal.name) is a \(AnimalType.bug.rawValue)")
default:
    print("This is not a cat nor a bug")
}
}
```
Upvotes: 1
|
2018/03/18
| 1,518 | 5,488 |
<issue_start>username_0: This is what I want, though I can't find the proper syntax.
e.g.
```
select * from campaign where campaign_display_start is minimum;
```<issue_comment>username_1: ```
enum Command: String {
case Mammal = "Mammal"
case Reptile = "Reptile"
case Fish = "Fish"
}
let command = Command(rawValue: "d")
switch command {
case .Mammal?:
print("Mammal")
case .Reptile?:
print("second")
case .Fish?:
print("third")
case nil:
print("not found")
}
// prints "not found"
```
Upvotes: 3 <issue_comment>username_2: You can create an animal type from string rawValue and switch on that:
But first I'd change the cases to lowercase; that's the preferred style in Swift.
```
func checkType(of animal: Animal) {
guard let animalType = AnimalType(rawValue: animal.type) else {
print("Not an animal type")
return
}
switch animalType {
case .mammal: break
case .reptile: break
case .fish: break
}
}
```
Alternatively, you can also switch on the string and compare if it matches any of your AnimalType rawValues:
```
func checkType(of animal: Animal) {
switch animal.type {
case AnimalType.mammal.rawValue: break
case AnimalType.reptile.rawValue: break
case AnimalType.fish.rawValue: break
default:
print("Not an animal type")
break
}
}
```
Upvotes: 3 <issue_comment>username_3: I'd suggest you use an `enum` for comparison, making the `type` optional. You could also use the `rawValue` of the enum to compare them.
```
enum AnimalType: String {
case Mammal //No need of case Mammal = "Mammal"
case Reptile
case Fish
}
struct Animal {
let name: String
let type: AnimalType?
}
let lion = Animal(name: "Lion", type: .Mammal)
switch lion.type {
case .Mammal?:
break
case .Reptile?:
break
case .Fish?:
break
case nil:
break
}
```
**EDIT:**
As Matthew in the comment said, if you’re getting the objects from a server, you need to have custom decoding process to convert the string response to corresponding `AnimalType` enum for comparison. Otherwise you’re good with using just the enum.
Upvotes: 1 <issue_comment>username_4: If the strings are exactly the same as the enum, you don't have to define it. Following is my working solution.
```
enum SegueIdentifier: String {
case pushViewController1
case pushViewController2
case pushViewController3
}
switch SegueIdentifier(rawValue: segue.identifier ?? "") {
case .pushViewController1:
if let viewController1 = segue.destination as? ViewController1 {
viewController1.customProperty = customPropertyValue
}
case .pushViewController2:
if let viewController2 = segue.destination as? ViewController2 {
viewController2.customProperty = customPropertyValue
}
case .pushViewController3:
if let viewController3 = segue.destination as? ViewController3 {
viewController3.customProperty = customPropertyValue
}
default:
let message = "This case has not been handled."
let alertController = UIAlertController(title: "Alert", message: message, preferredStyle: .alert)
let okayAction = UIAlertAction(title: "Okay", style: .default) { (_:UIAlertAction) in }
alertController.addAction(okayAction)
present(alertController, animated: true, completion: nil)
}
```
Upvotes: 0 <issue_comment>username_5: Paste the code below in your Playground to see how it works.
You can basically create a list with animals in it, and reach the rawValue of an AnimalType to check when the switch on each animal in your list matches like this:
```
import UIKit
enum AnimalType: String {
case mammal = "Mammal"
case reptile = "Reptile"
case fish = "Fish"
case cat = "Cat"
case bug = "Bug"
}
struct Animal {
let name: String
let type: String
}
// Create some animals list
let animals: [Animal] = [Animal(name: "Kiwi", type: "Cat"),
Animal(name: "Copy", type: "Cat"),
Animal(name: "Oebi", type: "Cat"),
Animal(name: "Zaza", type: "Bug")]
// For each animal in the list
for animal in animals {
// Check if its type actually matches a type from your enum
switch animal.type {
case AnimalType.cat.rawValue:
print("\(animal.name) is a \(AnimalType.cat.rawValue)")
case AnimalType.bug.rawValue:
print("\(animal.name) is a \(AnimalType.bug.rawValue)")
default:
    print("This is not a cat nor a bug")
}
}
```
Unless you want type of your Animal struct to be of AnimalType. If so, you can do this:
```
import UIKit
enum AnimalType: String {
case mammal = "Mammal"
case reptile = "Reptile"
case fish = "Fish"
case cat = "Cat"
case bug = "Bug"
}
struct Animal {
let name: String
let type: AnimalType
}
// Create some animals list
let animals: [Animal] = [Animal(name: "Kiwi", type: .cat),
Animal(name: "Copy", type: .cat),
Animal(name: "Oebi", type: .cat),
Animal(name: "Zaza", type: .bug)]
// For each animal in the list
for animal in animals {
// Check if its type actually matches a type from your enum
switch animal.type {
case .cat:
print("\(animal.name) is a \(AnimalType.cat.rawValue)")
case .bug:
print("\(animal.name) is a \(AnimalType.bug.rawValue)")
default:
    print("This is not a cat nor a bug")
}
}
```
Upvotes: 1
|
2018/03/18
| 1,413 | 4,022 |
<issue_start>username_0: `Checksum()`, `Binary_Checksum()` and `CHECKSUM_AVG()` functions are provided by SQL Server to build a hash index based on an expression or a column list.
This can be helpful in determining whether a row has changed or not. The mechanism can then be used to identify whether the record has been updated or not.
I have found lots of examples of collisions where the same hash value is generated for different values. How can we identify the collision condition for these functions?
Does anyone have a clue about which algorithm or technique is used to generate/compute the hash without hash collisions?<issue_comment>username_1: According to the [CHECKSUM weakness explained](http://weblogs.sqlteam.com/peterl/archive/2010/08/19/checksum-weakness-explained.aspx) article on sqlTeam:
>
> The built-in CHECKUM function in SQL Server is built on a series of 4 bit left rotational xor operations.
>
>
>
A [forum post from 2006](http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=70832) (linked in the article as well) posted by <NAME> includes SQL user-defined functions that compute the checksum. The author of the post claims 100% compatibility with SQL Server's built-in function (I haven't tested it myself).
In case the link goes dead, here is a copy of the relevant part:
>
> With text/varchar/image data, call with SELECT `BINARY\_CHECKSUM('abcdefghijklmnop')`, `dbo.fnPesoBinaryChecksum('abcdefghijklmnop')`
> With integer data, call with `SELECT BINARY\_CHECKSUM(123)`, `dbo.fnPesoBinaryChecksum(CAST(123 AS VARBINARY))`
> I haven't figured out how to calculate checksum for integers greater than 255 yet.
>
> ```
> CREATE FUNCTION dbo.fnPesoBinaryChecksum
> (
> @Data IMAGE
> )
> RETURNS INT
> AS
>
> BEGIN
> DECLARE @Index INT,
> @MaxIndex INT,
> @SUM BIGINT,
> @Overflow TINYINT
>
> SELECT @Index = 1,
> @MaxIndex = DATALENGTH(@Data),
> @SUM = 0
>
> WHILE @Index <= @MaxIndex
> SELECT @SUM = (16 * @SUM) ^ SUBSTRING(@Data, @Index, 1),
> @Overflow = @SUM / 4294967296,
> @SUM = @SUM - @Overflow * 4294967296,
> @SUM = @SUM ^ @Overflow,
> @Index = @Index + 1
>
> IF @SUM > 2147483647
> SET @SUM = @SUM - 4294967296
> ELSE IF @SUM BETWEEN 32768 AND 65535
> SET @SUM = @SUM - 65536
> ELSE IF @SUM BETWEEN 128 AND 255
> SET @SUM = @SUM - 256
>
> RETURN @SUM
> END
>
> ```
>
>
> >
> > Actually this is an improvement of MS function, since it accepts TEXT and IMAGE data.
> >
> >
> >
>
>
>
> ```
> CREATE FUNCTION [dbo].[fnPesoTextChecksum]
> (
> @Data TEXT
> )
> RETURNS INT
> AS
>
> BEGIN
> DECLARE @Index INT,
> @MaxIndex INT,
> @SUM BIGINT,
> @Overflow TINYINT
>
> SELECT @Index = 1,
> @MaxIndex = DATALENGTH(@Data),
> @SUM = 0
>
> WHILE @Index <= @MaxIndex
> SELECT @SUM = (16 * @SUM) ^ ASCII(SUBSTRING(@Data, @Index, 1)),
> @Overflow = @SUM / 4294967296,
> @SUM = @SUM - @Overflow * 4294967296,
> @SUM = @SUM ^ @Overflow,
> @Index = @Index + 1
>
> IF @SUM > 2147483647
> SET @SUM = @SUM - 4294967296
> ELSE IF @SUM BETWEEN 32768 AND 65535
> SET @SUM = @SUM - 65536
> ELSE IF @SUM BETWEEN 128 AND 255
> SET @SUM = @SUM - 256
>
> RETURN @SUM
> END
>
> ```
>
>
Another good read is [Exploring Hash Functions In SQL Server](https://web.archive.org/web/20220705010823/http://blog.kejser.org/exploring-hash-functions-in-sql-server/) by <NAME>, where the author checks the built in hash functions in sql server for speed an quality.
Upvotes: 2 <issue_comment>username_2: PHP implementation of BINARY\_CHECKSUM:
```php
$input = 'binary string';
$sum = 0;
for ($i = 0; $i < strlen($input); $i++) {
    $sum = ($sum << 4) ^ ord($input[$i]); // $input, not the undefined $in
$sum = ($sum & 0xffffffff) ^ ($sum >> 32);
}
return $sum > 0x7fffffff ? $sum - 0x100000000 : $sum;
```
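For cross-checking outside the database, here is a sketch of the same algorithm in Python (the function name is mine); unlike the PHP version above, it also mirrors the 16-bit and 8-bit signed-range folds from the T-SQL:

```python
def binary_checksum(data: bytes) -> int:
    s = 0
    for b in data:
        s = (s << 4) ^ b                  # SELECT @SUM = (16 * @SUM) ^ byte
        s = (s & 0xFFFFFFFF) ^ (s >> 32)  # fold the overflow back in
    # mirror the signed-range adjustments of the T-SQL function
    if s > 0x7FFFFFFF:
        s -= 0x100000000
    elif 0x8000 <= s <= 0xFFFF:
        s -= 0x10000
    elif 0x80 <= s <= 0xFF:
        s -= 0x100
    return s

print(binary_checksum(b'\x01\x02'))  # 18
print(binary_checksum(b'\x80'))      # -128
```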
Upvotes: 1
|
2018/03/18
| 416 | 1,590 |
<issue_start>username_0: Why do those "load" and "store" operators need an "align" attribute, and how does it work with memory alignment?
BTW, why do we need this operator since the underlying system will do memory aligning for us automatically?<issue_comment>username_1: The use of alignment is detailed in the spec:
>
> The alignment `memarg.align` in load and store instructions does not affect the semantics. It is an indication that the offset `ea` at which the memory is accessed is intended to satisfy the property `ea mod 2^memarg.align = 0`. A WebAssembly implementation can use this hint to optimize for the intended use. Unaligned access violating that property is still allowed and must succeed regardless of the annotation. However, it may be substantially slower on some hardware.
>
>
>
There is no other semantic effect associated with alignment; misaligned and unaligned loads and stores still behave normally, it is simply a performance optimisation.
Upvotes: 2 <issue_comment>username_2: It's a promise to the VM that `(baseAddr + memarg.offset) mod 2^memarg.align == 0`, where `baseAddr` is the argument from the stack.
In other words, we virtually split our memory by blocks with size `2^memarg.align` bytes and promise VM that our actual address (`baseAddr + memarg.offset`) will be at the start of any block, not in the middle.
Since the maximum value of `memarg.align` is 3, the size of a block (in bytes) can be one of {1, 2, 4, 8} (1 = 2^0, .., 8 = 2^3).
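The arithmetic of that promise can be sketched in a few lines of Python (the helper name is mine):

```python
def aligned(base_addr, offset, align):
    # the hint promises: the effective address is a multiple of 2**align
    return (base_addr + offset) % (2 ** align) == 0

print(aligned(8, 0, 2))  # True  -> address 8 starts a 4-byte block
print(aligned(6, 0, 2))  # False -> address 6 sits mid-block
print(aligned(6, 2, 3))  # True  -> 8 is a multiple of 2**3
```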
Also, you can find a good detailed explanation [here](https://rsms.me/wasm-intro#addressing-memory).
Upvotes: 4 [selected_answer]
|
2018/03/18
| 441 | 1,602 |
<issue_start>username_0: My problem can be reproduced as this:
```
struct MyClass {
template <typename... Ts>
MyClass(Ts&&..., int) {};
};
int main() {
MyClass mc{1, 2 }; // error: cannot convert from 'initializer list' to 'MyClass'
}
```
What's wrong with my code?
|
2018/03/18
| 955 | 2,590 |
<issue_start>username_0: I have a stacked MultiRNNCell defined as below :
```
batch_size = 256
rnn_size = 512
keep_prob = 0.5
lstm_1 = tf.nn.rnn_cell.LSTMCell(rnn_size)
lstm_dropout_1 = tf.nn.rnn_cell.DropoutWrapper(lstm_1, output_keep_prob = keep_prob)
lstm_2 = tf.nn.rnn_cell.LSTMCell(rnn_size)
lstm_dropout_2 = tf.nn.rnn_cell.DropoutWrapper(lstm_2, output_keep_prob = keep_prob)
stacked_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm_dropout_1, lstm_dropout_2])
rnn_inputs = tf.nn.embedding_lookup(embedding_matrix, ques_placeholder)
init_state = stacked_lstm.zero_state(batch_size, tf.float32)
rnn_outputs, final_state = tf.nn.dynamic_rnn(stacked_lstm, rnn_inputs, initial_state=init_state)
```
In this code, there are two RNN layers. I just want to process the final state of this dynamic RNN. I expected the state to be a 2D tensor of shape `[batch_size, rnn_size*2]`.
The shape of the final\_state is 4D - `[2,2,256,512]`
Can someone please explain why am I getting this shape ? Also, how can I process this tensor so that I can pass it through a fully\_connected layer ?<issue_comment>username_1: I can't reproduce the `[2,2,256,512]` shape. But with this piece of code:
```
rnn_size = 512
batch_size = 256
time_size = 5
input_size = 2
keep_prob = 0.5
lstm_1 = tf.nn.rnn_cell.LSTMCell(rnn_size)
lstm_dropout_1 = tf.nn.rnn_cell.DropoutWrapper(lstm_1, output_keep_prob=keep_prob)
lstm_2 = tf.nn.rnn_cell.LSTMCell(rnn_size)
stacked_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm_dropout_1, lstm_2])
rnn_inputs = tf.placeholder(tf.float32, shape=[None, time_size, input_size])
# Shape of the rnn_inputs is (batch_size, time_size, input_size)
init_state = stacked_lstm.zero_state(batch_size, tf.float32)
rnn_outputs, final_state = tf.nn.dynamic_rnn(stacked_lstm, rnn_inputs, initial_state=init_state)
print(rnn_outputs)
print(final_state)
```
I get the right shape for `rnn_outputs`: `(batch_size, time_size, rnn_size)`
```
Tensor("rnn/transpose_1:0", shape=(256, 5, 512), dtype=float32)
```
The `final_state` is indeed a pair of `LSTMStateTuple` (for the 2 cells `lstm_dropout_1` and `lstm_2`):
```
(LSTMStateTuple(c=<tensor of shape (256, 512)>, h=<tensor of shape (256, 512)>),
 LSTMStateTuple(c=<tensor of shape (256, 512)>, h=<tensor of shape (256, 512)>))
```
as described in the docstring of `tf.nn.dynamic_rnn`:
```
# 'outputs' is a tensor of shape [batch_size, max_time, 256]
# 'state' is a N-tuple where N is the number of LSTMCells containing a
# tf.contrib.rnn.LSTMStateTuple for each cell
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: Don't have enough rep for comment..
Final state is:
`[depth, lstmtuple.c and .h, batch_size, rnn_size]`
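That indexing can be sanity-checked without TensorFlow. A shape-only sketch in plain Python (small stand-in sizes; the helper name is mine):

```python
# final_state is a tuple of 2 LSTMStateTuples, each holding a (batch, size)
# c and h tensor -- stacking those nesting levels yields the 4D shape.
n_layers, batch_size, rnn_size = 2, 4, 8  # stand-ins for 2, 256, 512

def shape(x):
    # walk nested lists/tuples, collecting the length at each level
    dims = []
    while isinstance(x, (list, tuple)):
        dims.append(len(x))
        x = x[0]
    return dims

tensor = [[0.0] * rnn_size for _ in range(batch_size)]        # one (batch, size) "tensor"
final_state = tuple((tensor, tensor) for _ in range(n_layers))  # (c, h) per layer

print(shape(final_state))  # [2, 2, 4, 8] -> [depth, c/h, batch_size, rnn_size]
```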
Upvotes: 0
|
2018/03/18
| 361 | 1,598 |
<issue_start>username_0: I am currently learning spring data can anyone explain what the difference between them `@NamedQuery` vs `@NamedNativeQuery` and When to use them?
Can we use native SQL in `@NamedQuery`?<issue_comment>username_1: `@NamedNativeQuery` is for writing SQL query, while `@NamedQuery` is for writing HQL query.
So in your case, for writing native SQL query you would be needing `@NamedNativeQuery`.
However, be aware that writing native SQL will make your application less flexible and a portability nightmare because you are tying your application to work only for a particular database. If you are absolutely sure that you won't be needing to migrate to a different database, then this might not be a big concern for you. Otherwise prefer native HQL over native SQL.
Upvotes: 2 [selected_answer]<issue_comment>username_2: `@NamedQuery` is for writing your queries in JPQL, the standard query language of the JPA standard. This is the default and you should stick to these queries whenever possible, because they are independent of the exact DBMS which you use.
`@NamedNativeQuery` on the other hand is for writing native queries (in SQL) which potentially can be DBMS specific.
It is a tradeoff. `@NamedQuery` gives you portability because you stick to the standard. `@NamedNativeQuery` gives you flexibility but you leave the standard and potentially lose portability. In case you switch to another DBMS, it is likely you would need to rewrite some of your `@NamedNativeQuery` definitions.
So we should only use `@NamedNativeQuery` if it is really necessary.
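For illustration, a mapping fragment contrasting the two (hypothetical entity and query names; this is declarative metadata that needs a JPA provider on the classpath, so it is not runnable on its own):

```java
@Entity
@NamedQuery(
    name  = "Employee.findByName",        // portable JPQL
    query = "SELECT e FROM Employee e WHERE e.name = :name")
@NamedNativeQuery(
    name  = "Employee.findByNameNative",  // DBMS-specific SQL
    query = "SELECT * FROM employees WHERE name = ?",
    resultClass = Employee.class)
public class Employee { /* id, name, ... */ }
```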
Upvotes: 2
|
2018/03/18
| 525 | 1,685 |
<issue_start>username_0: I'm trying to do a roleReact bot. The stock emojis are working and I based some on the official document of `discord.js`, but when it comes to custom emoji reactions I keep getting the result
>
> client undefined
>
>
>
This is my code so far:
```
case "react":
// this works
message.react("").then(reaction => console.log(typeof reaction));
//I keep getting a client undefiend error here
message.react( client.emojis.get("410431571083132933")).then(reaction => console.log(typeof reaction));
message.react("410431571083132933").then(reaction => console.log(typeof reaction));
break;
```
Does anybody know of a solution to this problem?<issue_comment>username_1: Are you sure that's a valid Emoji ID?
You can't right-click on an emoji and Copy ID. That would copy the ID of the message.
To get the ID of an Emoji, type `\:emoji:` => `<:emoji:123123123123>`
With that ID you can react a message.
```
message.react(client.emojis.get("123123123123"))
.then(reaction => console.log(typeof reaction));
```
Or maybe if you copy and pasted that code from somewhere, `client` is the default for the `Discord.Client()`, but some people use `bot` instead.
Give it a try with `bot` instead of `client`
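As a hedged aside (plain JavaScript, no discord.js required): given the `<:emoji:id>` text described above, the numeric ID can be pulled out with a small helper (name is mine) before passing it to `client.emojis.get(...)`:

```javascript
// Extract the ID from a custom-emoji mention such as "<:emoji:123123123123>".
// The optional "a" after "<" covers animated emojis ("<a:name:id>").
function emojiId(mention) {
  const m = /^<a?:\w+:(\d+)>$/.exec(mention);
  return m ? m[1] : null;
}

console.log(emojiId("<:emoji:123123123123>")); // "123123123123"
console.log(emojiId("not an emoji"));          // null
```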
Upvotes: 2 <issue_comment>username_2: Your bot must be on a server that has the custom emoji you want to use
Upvotes: 0 <issue_comment>username_3: You can remove the whole getting emojis part, see if this works:
```
message.react("<:emojiNameGoesHere:410431571083132933>").then(reaction => console.log(typeof reaction));
```
This is the text that actually is sent, and Discord just erases it client side and replaces it with an emoji.
Upvotes: 2
|
2018/03/18
| 533 | 1,636 |
<issue_start>username_0: I have 2 data frames with uneven index:
[](https://i.stack.imgur.com/TLrdq.jpg)
[](https://i.stack.imgur.com/WcwuF.jpg)
I wanna combine these 2 data frames, `df1` and `df2`, into `df3`, but I couldn't do it with the below code:
```
df3 = pd.concat(df1,df2,axis=1)
```
Please, help: how to concatenate?
I want to arrive at this data frame:
[](https://i.stack.imgur.com/XipXG.jpg)
|
2018/03/18
| 2,584 | 8,818 |
<issue_start>username_0: I can't figure out why my one item list "\_head->next" is not being set to null (nullptr). Could I be confusing my tail and head pointers? This is for a doubly linked list. Most other pointers work, but this error/bug throws a seg fault when walking through the list.
```
#pragma once
class dlist {
public:
dlist() { }
struct node {
int value;
node* next;
node* prev;
};
node* head() const { return _head; }
node* tail() const { return _tail; }
void insert(node* prev, int value){
//if empty
if(prev == nullptr)
{
//create node/prepare to insert into head ptr
_head = new node{value, _head, nullptr};
if(_tail == nullptr)
{
_tail = _head;
_tail->next = nullptr;
}
//set head
if(_head->next != nullptr)
{
_head->prev = _head;
_head->next = nullptr;
//_head->next->next = nullptr;
//above doesn't work
}
}
//if not empty
else
{
//create and set new node
node* node1 = new node{value, prev->next, nullptr};
prev->next = node1;
//check if null, to make sure inserting after a filled "space"
if(node1->next != nullptr)
{
node1->next->prev = node1;
//set to end or tail
if(prev == _tail)
{
_tail = node1;
_tail->next = nullptr;
}
}
}
}
private:
node* _head = nullptr;
node* _tail = nullptr;
};
```<issue_comment>username_1: This is kind of a hard one to get your head around given the initial code you posted and the ambiguity in your use of `prev` as a parameter to `insert`.
To begin, `insert` is a member function of `dlist` and has direct access to both `_head` and `_tail`. You do not need a `node*` parameter for `insert` because your only concern with the linked list will be checking whether `_head` is `nullptr` and allocating/assigning `value` for `_head`, or you will iterate until your `iter->next` is `nullptr` and add the allocated node as `iter->next` setting `_tail` to the newly allocated node.
Most of your existing code, frankly, just left me scratching my head. Also, the default for `class` is `private`, so generally you only need to designate `public` members.
Putting the logic together, you could do something like the following:
```
class dlist {
struct node {
int value;
node* next;
node* prev;
};
node* _head = nullptr;
node* _tail = nullptr;
public:
dlist() { }
node* head() const { return _head; }
node* tail() const { return _tail; }
void insert (int value)
{
node *newnode = new node {value, nullptr, nullptr};
if (_head == nullptr)
_head = _tail = newnode;    // first node is both head and tail
else {
node* iter = _head;
for (; iter->next; iter = iter->next) {}
newnode->prev = iter;
_tail = iter->next = newnode;
}
}
};
```
When you allocate memory in a class, in order to prevent leaking memory like a sieve, you need to also declare a destructor that will free the memory you have allocated when an instance of the class goes out of scope. Nothing fancy is needed, just iterate from `_head` to the end of the list and `delete` nodes as you go.
(**note:** you must not `delete` your reference to the current node until you have saved a reference to the next node, so use a separate, aptly named `victim` node to perform the `delete`)
You could do something like:
```
~dlist() {
node* iter = _head;
while (iter) {
node* victim = iter;
iter = iter->next;
delete victim;
}
}
```
Putting it altogether and adding a couple of functions to `print` and reverse or `rprint` the list, you could do something like the following:
```
#include <iostream>
using namespace std;

class dlist {
    struct node {
        int value;
        node* next;
        node* prev;
    };
    node* _head = nullptr;
    node* _tail = nullptr;
public:
    dlist() { }
    ~dlist() {
        node* iter = _head;
        while (iter) {
            node* victim = iter;
            iter = iter->next;
            delete victim;
        }
    }
    node* head() const { return _head; }
    node* tail() const { return _tail; }
    void insert (int value)
    {
        node *newnode = new node {value, nullptr, nullptr};
        if (_head == nullptr)
            _head = _tail = newnode;    /* first node is both head and tail */
        else {
            node* iter = _head;
            for (; iter->next; iter = iter->next) {}
            newnode->prev = iter;
            _tail = iter->next = newnode;
        }
    }
    void print () {
        for (node* iter = _head; iter; iter = iter->next)
            cout << " " << iter->value;
        cout << "\n";
    }
    void rprint() {
        for (node* iter = _tail; iter; iter = iter->prev)
            cout << " " << iter->value;
        cout << "\n";
    }
};

int main (void) {
    dlist list;
    int tmp;
    while (cin >> tmp)
        list.insert(tmp);
    list.print();
    list.rprint();
}
```
**Example Use/Output**
```
$ echo "2 3 4 6 8 10" | ./bin/dlist
2 3 4 6 8 10
10 8 6 4 3 2
```
**Memory Use/Error Check**
In any code you write that dynamically allocates memory, you have 2 *responsibilities* regarding any block of memory allocated: (1) *always preserve a pointer or reference to the starting address* for the block of memory so, (2) it can be *freed* when it is no longer needed.
It is imperative that you use a memory error checking program to insure you properly use the memory you allocate, and to confirm that you free all the memory you have allocated.
For Linux `valgrind` is the normal choice. There are similar memory checkers for every platform. They are all simple to use, just run your program through it.
```
$ echo "2 3 4 6 8 10" | valgrind ./bin/dlist
==18878== Memcheck, a memory error detector
==18878== Copyright (C) 2002-2017, and GNU GPL'd, by <NAME> et al.
==18878== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==18878== Command: ./bin/dlist
==18878==
2 3 4 6 8 10
10 8 6 4 3 2
==18878==
==18878== HEAP SUMMARY:
==18878== in use at exit: 0 bytes in 0 blocks
==18878== total heap usage: 9 allocs, 9 frees, 77,968 bytes allocated
==18878==
==18878== All heap blocks were freed -- no leaks are possible
==18878==
==18878== For counts of detected and suppressed errors, rerun with: -v
==18878== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
```
Always confirm that you have freed all memory you have allocated and that there are no memory errors.
Upvotes: 1 <issue_comment>username_2: I strongly recommend not using *C-style pointers* in *C++* anymore. *C++* has *smart pointers* that do all the memory management for you.
Change your node to this:
```
struct node {   // struct: members stay public, so std::make_shared can call the constructor
    node(int value) : value{value} {}
    int value;
    std::shared_ptr<node> next;
    std::weak_ptr<node> prev;
};
```
The `shared_ptr` will delete the object it holds as soon as no copy/instance of itself exist anymore (it uses a simple counter). The `weak_ptr` makes sure that you don't have circular references, which would mean your nodes never get deleted.
Also in your *dlist* class change the members accordingly:
```
std::shared_ptr<node> _head;
std::weak_ptr<node> _tail;
```
Then change your getters to this:
```
std::shared_ptr<node> head() const { return _head; }
std::shared_ptr<node> tail() const { return _tail.lock(); }
```
`weak_ptr::lock()` will increase the `shared_ptr` counter it belongs to and return a new `shared_ptr`, that then can be accessed without risk of losing the object. If the object was deleted already, it will return en *empty* `shared_ptr`.
Then change your insert method like this:
```
void insert(int value)
{
    auto newnode = std::make_shared<node>(value);
    if (auto tail = _tail.lock()) {   // weak_ptr has no operator->, lock() it first
        newnode->prev = tail;
        tail->next = newnode;
        _tail = newnode;
    }
    else {
        _head = newnode;
        _tail = newnode;
    }
}
```
This will set *_head* and *_tail* to the *newnode* if the list is empty, or add it to the end of your list otherwise.
Read more about [shared\_ptr](http://en.cppreference.com/w/cpp/memory/shared_ptr) and [weak\_ptr](http://en.cppreference.com/w/cpp/memory/weak_ptr) here.
//Edit: If you ever want to clear your list, you can simply do `_head = nullptr;`. This will cause the `shared_ptr` to decrease its reference counter and delete the `node` it is holding (if the counter goes to 0), thus deleting the `shared_ptr<node> next;` in that node and so on. `_tail` will automatically be *empty* as soon as the `shared_ptr` it refers to is destructed. To be absolutely sure that your dlist class is in a clean empty state, you should still call `_tail.reset();` (a `weak_ptr` cannot be assigned `nullptr` directly), because some other part of your code can hold a `shared_ptr` to any of the nodes, thus keeping `_tail` alive in your list class unless you explicitly clear it.
Upvotes: 1 [selected_answer]
|
2018/03/18
| 699 | 2,010 |
<issue_start>username_0: Suppose I have `L = [{'G'}, {'D'}, {'B','C'}]`.
I want to check if 'C' is in L.
I tried doing `{'C'} in L`, but it returns `False`.
How would I check if 'C' is in L?<issue_comment>username_1: You were comparing the whole set `{'C'}` against each element for equality, not checking whether `'C'` is a member of any of the sets. What you have to do is -
1. Iterate the `list` of `set` using list comprehension
2. For each `set` check whether the search string exists
3. Take the `boolean` list of values and put it through `any()` function which returns `True` if any one of `boolean` items in the `list` is `True`.
You should do -
```
L = [{'G'}, {'D'}, {'B','C'}]
print(any([ 'C' in i for i in L]))
```
**Output**
```
True
```
For another example -
```
print(any([ 'X' in i for i in L]))
```
**Output**
```
False
```
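A minor refinement of the same idea (the function name is mine): `any()` accepts a generator, so the intermediate list can be dropped, and it short-circuits at the first set that contains the item.

```python
L = [{'G'}, {'D'}, {'B', 'C'}]

def contains(item, list_of_sets):
    # any() stops scanning at the first set containing item
    return any(item in s for s in list_of_sets)

print(contains('C', L))  # True
print(contains('X', L))  # False
```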
Upvotes: 3 <issue_comment>username_2: You can also filter results based on True/False conditions:
```
L = [{'G'}, {'D'}, {'B','C'}]
word='C'
print([True if list(filter(lambda x:word in x,L)) else False])
```
output:
```
[True]
```
>
> **Update based on comment:**
>
>
>
You can try this:
```
def union_similar(list_data,values_data):
checker = {}
final_ = []
for item in list_data:
for values in values_data:
if values in item:
if values not in checker:
checker[values] = [item]
else:
checker[values].append(item)
sub_ = set()
for check, no in checker.items():
if len(no) > 1:
for i in no:
sub_.update(*i)
list_data.remove(i)
else:
sub_.update(*no[0])
list_data.remove(no[0])
list_data.append(sub_)
return list_data
print(union_similar(L,c))
```
>
> output:
>
>
>
Test\_case 1:
```
L = [{'G'}, {'D'}, {'B','C'}]
c=['C','G']
```
output:
```
[{'D'}, {'B', 'C', 'G'}]
```
Test\_case\_2:
```
L = [{'G'}, {'D'}, {'B','C'},{'U','C'}]
c=['C','G']
```
output:
```
[{'D'}, {'U', 'B', 'G', 'C'}]
```
Upvotes: 0
|
2018/03/18
| 532 | 1,940 |
<issue_start>username_0: I have this code :
```
class mydict():
def __init__(self, dict, **kwargv):
for key, item in dict.items():
print key, item
for key, item in kwargv.items():
print key, item
test_dic = {1:'I', 2:'love'}
dic = mydict(test_dic, 3='python')
```
If I put `mydict(test_dic, name='python')` it works, but if it's an `int`, I got the following error:
>
> keyword can't be an expression
>
>
>
Can someone explain the process behind this?<issue_comment>username_1: Variable names cannot start with a number, and the kwargs inputs have to be valid variable names
Upvotes: 0 <issue_comment>username_2: ```
3='python'
```
is what is called a keyword argument. The keyword in this case is `3`. But keywords must be valid identifiers. [Valid identifiers](https://docs.python.org/3/reference/lexical_analysis.html#identifiers) are:
>
> the uppercase and lowercase letters A through Z, the underscore \_ and, except for the first character, the digits 0 through 9
>
>
>
Upvotes: 2 <issue_comment>username_3: In Python, keyword arguments names, like parameter names, have to be identifiers—the same rules as for variable names, function names, etc.
This isn't really explained in the tutorial. It is described precisely in [the reference](https://docs.python.org/3/reference/expressions.html#grammar-token-call), but that's not exactly novice-friendly.
Anyway, this means the shorthand `dict(key1=value1, key2=value2)` doesn't work for all dictionaries, just dictionaries whose keys are strings that are valid identifiers. Of course you can always just use a dict display, as you did with `test_dic`.
This usually comes up in more advanced code, where you try to call something like `func(**test_dic)`, which expands into the equivalent of `func(1='I', 2='love')` and gives you a very mysterious error, so… be glad you ran into this earlier, where it's easier to understand.
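A quick demonstration of that failure mode (standard CPython behaviour):

```python
test_dic = {1: 'I', 2: 'love'}

def f(**kwargs):
    return kwargs

try:
    f(**test_dic)            # effectively the illegal f(1='I', 2='love')
except TypeError as e:
    print(type(e).__name__)  # prints "TypeError" (keywords must be strings)

print(f(**{'name': 'python'}))  # string keys are fine: {'name': 'python'}
```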
Upvotes: 0
|
2018/03/18
| 490 | 1,613 |
<issue_start>username_0: I have to take "-4, 2, -5, 0, 3" as input from the console in java and need to store in an array[]. Where array[0]=-4, array[1] = 2 and so on .
I am completely new to Java and need immediate help.<issue_comment>username_1: There is quite a bit to this, but assuming you have a way to read input, and you have that input as a String in your code already, and it's nicely formatted, then...
```
String input = "-4, 2, -5, 0, 3";
input = input.replaceAll("\\s", ""); // Strings are immutable, so reassign. Now we have "-4,2,-5,0,3"
String[] arr = input.split(","); // Split on the commas
System.out.println(arr[0]); // Will print -4
System.out.println(arr[1]); // Will print 2, etc.
```
Hope this helps!
Upvotes: -1 <issue_comment>username_2: First of all, take the number of inputs from the console, then run a loop to get the inputs, for example -
```
import java.util.Scanner;

public static void main(String[] args) {
    Scanner scanner = new Scanner(System.in);
    System.out.print("Enter number of input : ");
    Integer total = scanner.nextInt();
    Integer[] array = new Integer[total];
    for (int i = 0; i < total; i++) {
        array[i] = scanner.nextInt();
    }
}
```
Upvotes: 1 <issue_comment>username_3: ```
import java.util.Scanner;
/**
*
* @author wathsara
*/
public class Example1 {
public static void main(String[] args) {
Scanner s=new Scanner(System.in);
String a = s.nextLine();
String[] b=(a.split(","));
for (int i = 0; i < b.length; i++) {
System.out.println(b[i]);
}
}
}
```
Try the above code. Make sure you give the input in the form `-4,5,6,4,8`.
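Since the goal is to end up with ints rather than strings, here is a hedged variation (class and method names are mine) that also converts each token with `Integer.parseInt`:

```java
import java.util.Arrays;

public class ParseInput {
    // Turn a line like "-4, 2, -5, 0, 3" into an int[]
    static int[] parse(String line) {
        String[] parts = line.trim().split("\\s*,\\s*"); // split on commas, eat spaces
        int[] out = new int[parts.length];
        for (int i = 0; i < parts.length; i++) {
            out[i] = Integer.parseInt(parts[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(parse("-4, 2, -5, 0, 3")));
        // prints [-4, 2, -5, 0, 3]
    }
}
```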
Upvotes: 2
|
2018/03/18
| 807 | 2,417 |
<issue_start>username_0: I have an object that looks like this:
```
{
"1": "Technology",
"2": "Startup",
"3": "IT",
}
```
and I need to convert it to an array of objects that would look like this:
```
[
{id: 1, name: "Technology"},
{id: 2, name: "Startup"},
{id: 3, name: "IT"}
]
```
What would be the cleanest & efficient way to do this?<issue_comment>username_1: You can use `.map()` with `Object.keys()`:
```js
let data = {
"1": "Technology",
"2": "Startup",
"3": "IT",
};
let result = Object.keys(data)
.map(key => ({id: Number(key), name: data[key]}));
console.log(result);
```
**Useful Resources:**
* [`Array.prototype.map()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map)
* [`Object.keys()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/keys)
Upvotes: 5 [selected_answer]<issue_comment>username_2: Assuming your object instance is named `obj`:
```
Object.keys(obj).reduce((acc, curr) => {
return [...acc, { id: curr, name: obj[curr] }]
}, [])
```
Upvotes: 2 <issue_comment>username_3: the trivial way
```
var o = {
"1": "Technology",
"2": "Startup",
"3": "IT",
};
var arr = [];
for(var i in o) {
arr.push({
id: i,
number: o[i]
});
};
```
Upvotes: 1 <issue_comment>username_4: ```js
const words = {
told: 64,
mistake: 11,
thought: 16,
bad: 17
}
const results = []
Object.entries(words).map(val => results.push({
text: val[0],
value: val[1]
}))
console.log(results)
```
Upvotes: 0 <issue_comment>username_5: Nowadays you can use [`Object.entries()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/entries) to turn an object into an array of key-value pairs (also stored as an array).
This allows us to take advantage of [array destructuring](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment#array_destructuring) so we can make this a nice one-liner:
```js
const obj = {
"1": "Technology",
"2": "Startup",
"3": "IT",
};
const result = Object.entries(obj).map(([id, name]) => ({id: +id, name}));
console.log(result);
```
Adding in a [unary plus (`+`)](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Unary_plus) to turn the id (which is a string) into a number.
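As a side note (assuming an ES2019+ environment), `Object.fromEntries()` inverts this transformation if you ever need to go back; note the numeric ids come back as string keys:

```javascript
const arr = [
  { id: 1, name: "Technology" },
  { id: 2, name: "Startup" },
  { id: 3, name: "IT" },
];
// rebuild the keyed object from the array of {id, name}
const back = Object.fromEntries(arr.map(({ id, name }) => [id, name]));
console.log(back); // { '1': 'Technology', '2': 'Startup', '3': 'IT' }
```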
Upvotes: 1
|
2018/03/18
| 938 | 3,044 |
<issue_start>username_0: ```
RadialGradient gradient1 = new RadialGradient(
0, // focusAngle
.1, // focusDistance
80, // centerX
45, // centerY
120, // radius
false, // proportional
CycleMethod.REFLECT, // cycleMethod
new Stop(0, Color.YELLOW),
new Stop(1, Color.BLUE)
);
Slider slider = new Slider(0, 100, 50);
slider.setLayoutX(30);
slider.setLayoutY(90);
slider.setShowTickMarks(true);
slider.setShowTickLabels(true);
slider.setMajorTickUnit(100);
slider.setMinorTickCount(20);
slider.setBlockIncrement(5);
**gradient1.getRadius().bind(slider.valueProperty());**
root.getChildren().add(slider);
```
The line of code between asterisks gives errors. This code is part of a JavaFX application, and I have no idea why I can't bind radius to the slider value.
|
2018/03/18
| 1,157 | 4,020 |
<issue_start>username_0: When I try to use `git push`, an error reports:
>
> Fatal: fatal: unable to access '<https://github.com/xxx>': OpenSSL
> SSL\_connect: SSL\_ERROR\_SYSCALL in connection to github.com:443
>
>
>
My git version is 2.16.2 for windows
It worked well before this day, and reinstalling git seems to not work.
Can anyone help me with that? Thanks in advance!<issue_comment>username_1: When reinstalling Git did nothing, I found an [issue on GitHub](https://github.com/desktop/desktop/issues/3326) that helped me solve it.
In your terminal run this command first:
```
git config --global http.sslBackend "openssl"
```
Then this one:
```
git config --global http.sslCAInfo "C:\Program Files\Git\mingw64\ssl\cert.pem"
```
You may need to change your path depending on where you have it installed.
Upvotes: 6 <issue_comment>username_2: If you are behind a proxy, try the following:
```
git config --global --add remote.origin.proxy ""
```
Upvotes: 8 [selected_answer]<issue_comment>username_3: Opening a new terminal session worked for me
Upvotes: 5 <issue_comment>username_4: I solved this same problem by changing the git URL in the 'clone' step: use SSH instead of the HTTPS link.
Upvotes: 2 <issue_comment>username_5: If you're using a proxy, open Run and enter `inetcpl.cpl`,
then Connections, then LAN settings, then Advanced,
and now you can see your proxy; use the HTTP one.
Then open Git Bash and enter this command:
```
$ git config --global http.proxy
```
*If there's no output, then the proxy in Git Bash is not set.*
Then set it with this command, using the proxy and port shown in the 1st paragraph:
```
$ git config --global http.proxy proxyaddress:port
```
then enter this command again
```
$ git config --global http.proxy
```
and there you go, it is set.
To reset the proxy in Git Bash, just enter this command:
```
$ git config --global --unset http.proxy
```
I was also having this problem lately; I was using Psiphon VPN on desktop.
As a newbie it was also hard to find this solution.
Glad I could help. :)
Upvotes: 4 <issue_comment>username_6: Sometimes, it can be simply because your system was unable to connect to GitHub, possibly because you were not connected to the Internet (or you had a lousy connection).
(It's true - this problem can be reproduced. Just disconnect your LAN/WiFi and then do `git pull`, you will get the same error).
PS: This happened with me. Sometimes the problem is your internet connection rather than configs.
Upvotes: 3 <issue_comment>username_7: I recently installed git 2.24.1 and not sure what changes I made, but it was not allowing me to `clone` or `push` without admin account on my machine.
This solved my issue:
1. In your terminal run this command first:
```
git config --global http.sslBackend "openssl"
```
2. Then this one:
```
git config --global http.sslCAInfo "C:\Program Files\Git\mingw64\ssl\cert.pem"
```
Upvotes: 2 <issue_comment>username_8: ```
git config --global --add remote.origin.proxy "127.0.0.1:(proxy http port number)"
```
[](https://i.stack.imgur.com/CXWky.jpg)
Upvotes: 4 <issue_comment>username_9: I tried many answers and it still failed with the same error, including the following one:
```
git config --global http.sslBackend "openssl"
```
However, I tried reverting that by:
```
git config --global --unset-all http.sslBackend
```
then restart my laptop, and everything started working again.
I'm not exactly sure what went wrong initially, but lesson learned is that the correct config isn't the same for everyone.
Upvotes: 1 <issue_comment>username_10: For Shadowsocks users:
1. Click the Shadowsocks icon, go Preferences:
[](https://i.stack.imgur.com/dz4kj.png)
2. Suppose that your local proxy listen port is XXX, run this:
```
git config --global --add remote.origin.proxy "127.0.0.1:XXX"
```
Upvotes: 2
|
2018/03/18
| 482 | 1,656 |
<issue_start>username_0: I am making an API in which I need to check the token validity for a particular user, and for that I am using the DateTime function of PHP. I am saving a token and the current time on login, and I want the token to be valid for 10 minutes; so when a user makes a request within the first 10 minutes of its login, I want to update the time validity of the token.
```
$currentTime = new DateTime();
$curTime = $currentTime->format("Y-m-d H:i:s");
$time10MinutesAgo = new DateTime("10 minutes ago");
$time10Minutes = $time10MinutesAgo->format("Y-m-d H:i:s");
$token_time = $query1_r['token_time']; // value is stored in the db(mysql)
if (($curTime > $token_time) && ($token_time >= $time10Minutes)){}
```
first I was unable to pass the second condition but now even my page is not working.<issue_comment>username_1: Use epoch time values for time comparisons; it's much easier to compare numbers than dates.
```
$currentTime = time();
$time10MinutesAgo = strtotime('-10 minutes');
$token_time = strtotime($query1_r['token_time']); // value is stored in the db(mysql)
if(($currentTime >$token_time) && ($token_time >= $time10MinutesAgo )){}
```
Upvotes: 2 <issue_comment>username_2: If you already have DateTime object, don't convert them to strings, just compare objects between each other, like this:
```php
$currentTime = new DateTime();
$time10MinutesAgo = new DateTime("10 minutes ago");
$tokenTime = new DateTime($query1_r['token_time']);
if ($time10MinutesAgo <= $tokenTime && $tokenTime <= $currentTime) {
echo "Token time is between now and now-10 minutes.";
}
```
`[**demo**](https://eval.in/975623)`
Upvotes: 2 [selected_answer]
|
2018/03/18
| 303 | 1,106 |
<issue_start>username_0: I'm following this tutorial:
<https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d>
It shows content pulled via Google jupyter notebook into G Drive. The tutorial shows folders/files in the G Drive folders. Some of the files/folders I've pulled into G Drive show up in my G Drive folder, but not all.
I've noticed that G Drive UI does not reflect/refresh to show new Drive content imported via notebook.
Is this a bug or feature?
[](https://i.stack.imgur.com/bPBNL.gif)<issue_comment>username_1: Run this on the notebook:
```
!google-drive-ocamlfuse -cc
```
This will clear the cache and refresh the files. Every time you change a file in drive folder, you should call this command and the mounted drive would refresh with the changes.
Upvotes: 3 <issue_comment>username_2: If you changed something in your drive after you did the `drive.mount('/content/drive')`, you have to remount it with `drive.mount('/content/drive', force_remount=True)`.
Upvotes: 2
|
2018/03/18
| 740 | 2,362 |
<issue_start>username_0: Given a list like this, where first column is the id and second is a string,
```
x = [ [1, ["cat","dog"]],
[2, ["dog", "mouse", "elephant"]],
[3, ["mouse", "giraffe"]] ]
```
I would like to know a way to efficiently group up all the distinct elements into another list.
My problem comes in because there is a complexity requirement that I have to meet.
O(UCK), where U is the number of items in the list, C is the maximum number of characters in any animal, K is the maximum amount of animals in a list.
Example output:
```
[ ["cat"],
["dog"],
["mouse"],
["elephant"],
["giraffe"] ]
```
My solution used a dictionary to do this:
```
distinctList = []
distinctDict = {}
for item in x:
for animal in item[1]:
if animal not in distinctDict:
distinctList.append(animal)
distinctDict[animal] = 1
```
However the complexity for this would become O(UKN), where N is the number of items in the dictionary. This complexity is larger than the required complexity.<issue_comment>username_1: You can do that with a set comprehension like:
### Code:
```
uniques = {animal for row in data for animal in row[1]}
```
### Test Code:
```
data = [[1, ["cat", "dog"]],
[2, ["dog", "mouse", "elephant"]],
[3, ["mouse", "giraffe"]]]
uniques = {animal for row in data for animal in row[1]}
print(uniques)
```
### Results:
```
{'cat', 'giraffe', 'mouse', 'dog', 'elephant'}
```
Upvotes: 2 <issue_comment>username_2: This returns a nested list just like your example output is a nested list.
```
#!python2
x = [[1, ["cat", "dog"]], [2, ["dog", "mouse", "elephant"]], [3, ["mouse", "giraffe"]]]
new_lst = []
for sublst in x:
for subsublst in sublst[1]:
if not any(subsublst in sublst for sublst in new_lst):
new_lst.append([subsublst]) # nested list
# new_lst.append(subsublst) # a list of strings
print new_lst
'''
[['cat'], ['dog'], ['mouse'], ['elephant'], ['giraffe']]
'''
```
Upvotes: 0 <issue_comment>username_3: ```
In [126]: data = [[1, ["cat", "dog"]],
...: [2, ["dog", "mouse", "elephant"]],
...: [3, ["mouse", "giraffe"]]]
In [127]: [[x] for x in {animal for row in data for animal in row[1]}]
Out[127]: [['giraffe'], ['mouse'], ['elephant'], ['cat'], ['dog']]
```
Upvotes: 0
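(An addendum, not from the answers above.) A set comprehension does not preserve first-seen order. A single pass with an auxiliary set keeps the order, and since membership tests on a set are O(1) on average in CPython (hashing a string of length C costs O(C)), the pass stays within the required O(UCK):

```
def distinct_animals(data):
    seen = set()
    result = []
    for _, animals in data:
        for animal in animals:
            if animal not in seen:       # O(1) average; hashing the string is O(C)
                seen.add(animal)
                result.append([animal])  # nested, to match the example output
    return result

x = [[1, ["cat", "dog"]],
     [2, ["dog", "mouse", "elephant"]],
     [3, ["mouse", "giraffe"]]]
print(distinct_animals(x))
# [['cat'], ['dog'], ['mouse'], ['elephant'], ['giraffe']]
```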
|
2018/03/18
| 298 | 1,102 |
<issue_start>username_0: ```
import numpy as np
def main():
try:
date, price, open = np.loadtxt('CARG.csv', delimiter=',',
unpack=True, dtype='str')
x = 0
for eachDate in date:
saveLine = eachDate + ',' + price[x] + '\n'
saveFile = open('newCSV', 'a')
saveFile.write(saveLine)
saveFile.close()
x += 1
except Exception as e:
print(e)
main()
```<issue_comment>username_1: The problem is that you've named a local variable `open`, which shadows the builtin function of the same name—but then tried to use the builtin a couple lines later:
```
date, price, open = …
saveFile = open('newCSV', 'a')
```
So, instead of calling the builtin, you're calling the array. Which obviously doesn't work.
The solution is just to give your variable a different name.
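The shadowing itself can be demonstrated without numpy or the CSV file (a sketch, not part of the original answer):

```
def broken():
    open = ('2018-03-18', '1.23')   # local name shadows the builtin open()
    return open('newCSV', 'a')      # TypeError: 'tuple' object is not callable

def fixed():
    open_ = ('2018-03-18', '1.23')  # renamed, so the builtin is untouched
    return callable(open)           # builtin open() is still the function

try:
    broken()
except TypeError as e:
    print(e)

print(fixed())  # True
```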
Upvotes: 3 [selected_answer]<issue_comment>username_2: I had the same error and it helped me:
```
import io
with io.open('filename') as f:
#"Doing something you want to do with file"
```
Upvotes: 0
|
2018/03/18
| 794 | 2,373 |
<issue_start>username_0: I am working on a test docker-compose file to run a container with three mongod servers, which form a replica set (One is primary, one is secondary and one is arbiter).
My Dockerfile looks like :-
```
FROM mongo:3.6.2
MAINTAINER Shanty
RUN apt-get update && apt-get install -y netcat
COPY ./.docker/mongo_scripts /mongo_scripts
RUN chmod +rx /mongo_scripts/*.sh
EXPOSE 27091 27092 27093
ENTRYPOINT ["/mongo_scripts/rs.sh"]
```
and my script looks like :-
```
#!/bin/bash
# shell script to create a simple mongodb replica set (tested on osx)
mkdir -p /mongosvr/rs-0
mkdir -p /mongosvr/rs-1
mkdir -p /mongosvr/rs-2
mongod --dbpath /mongosvr/rs-0 --bind_ip_all --replSet "demo" --port 27091 --pidfilepath /mongosvr/rs-0.pid 2>&1 &
mongod --dbpath /mongosvr/rs-1 --bind_ip_all --replSet "demo" --port 27092 --pidfilepath /mongosvr/rs-1.pid 2>&1 &
mongod --dbpath /mongosvr/rs-2 --bind_ip_all --replSet "demo" --port 27093 --pidfilepath /mongosvr/rs-2.pid 2>&1 &
# wait a bit for the first server to come up
sleep 5
# call rs.initiate({...})
cfg="{
_id: 'demo',
members: [
{_id: 0, host: 'localhost:27091', 'priority':10},
{_id: 1, host: 'localhost:27092'},
{_id: 2, host: 'localhost:27093', arbiterOnly: true}
]
}"
mongo localhost:27091 <
```
It's a simple script running three mongod servers and initialising the replica set configuration.
But after the last command is successfully executed, the following
is logged :-
```
bye
I NETWORK [conn1] end connection 127.0.0.1:55178 (2 connections now open)
```
and my container stops:-
```
mongodb_rs exited with code 0
```
I guess its some issue with setting replica set config, after which some termination command is send....<issue_comment>username_1: The problem is that you've named a local variable `open`, which shadows the builtin function of the same name—but then tried to use the builtin a couple lines later:
```
date, price, open = …
saveFile = open('newCSV', 'a')
```
So, instead of calling the builtin, you're calling the array. Which obviously doesn't work.
The solution is just to give your variable a different name.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I had the same error and it helped me:
```
import io
with io.open('filename') as f:
#"Doing something you want to do with file"
```
Upvotes: 0
|
2018/03/18
| 523 | 2,011 |
<issue_start>username_0: I want to inverse the visibility of a view on a SurfaceView upon user click. A click makes the view visible on screen. The next click makes it invisible, and so on.
I already tried `view.setVisibility(Integer.reverse())` and `view.setVisibility(View.GONE ^ view.getVisibility())`, but none of them work. The latter does not even compile.<issue_comment>username_1: A view's visibility is not binary, as in there are not just two states. Views can be in three states: VISIBLE, INVISIBLE, or GONE. Which two are you trying to switch between?
Normally you set a view's visibility with something like:
```
view.setVisibility(View.GONE);
```
This is actually equivalent to:
```
view.setVisibility(8);
```
becuase the three values are each public static ints in the View class. They are:
```
public static final int VISIBLE = 0x00000000;
public static final int INVISIBLE = 0x00000004;
public static final int GONE = 0x00000008;
```
They are each power's of 2, so in binary their values would be:
```
VISIBLE = ...0000;
INVISIBLE = ...0100;
GONE = ...1000;
```
So Integer.reverse() isn't really what you want. VISIBLE is always 0, and to go from INVISIBLE to GONE you need to shift left.
Upvotes: 3 [selected_answer]<issue_comment>username_2: The XOR binary operation is applicable when the currently set state is either the same integer value or zero (VISIBLE).
I've written a simple test to demonstrate how the toggle logic would be implemented.
```
@Test
fun toggleVisibility() {
// toggle GONE
var visibility = VISIBLE
visibility = visibility xor GONE
assertEquals(visibility, GONE)
visibility = visibility xor GONE
assertEquals(visibility, VISIBLE)
// toggle INVISIBLE
visibility = VISIBLE
visibility = visibility xor INVISIBLE
assertEquals(visibility, INVISIBLE)
visibility = visibility xor INVISIBLE
assertEquals(visibility, VISIBLE)
}
```
Upvotes: 0
|
2018/03/18
| 510 | 1,561 |
<issue_start>username_0: This code is not able to resolve `void K::func<B>(B&)`.
```
struct K {
template <typename T> void func(T &t);
};
struct B {
void ok();
};
// definition
template <typename T> void K::func(T &t) { t->ok(); }
// getting: undefined reference to `void K::func<B>(B&)'
// nm says ' U void K::func<B>(B&)'
template <> void K::func<B>(B &b);
template <> void K::func(B &b);
int main() {
K k;
B b;
k.func(b);
};
```
Note: Several other questions at SO were related to compilation; I'm stuck at linking. Please point me to a duplicate if any.
UPDATE:
I'm following <http://en.cppreference.com/w/cpp/language/template_specialization>; look for functions g1 or g2.
UPDATE2: fixed 'func' to 'K::func' to fix the 'out of class member template definition' error; K::func still can't be specialized.<issue_comment>username_1: You need to provide a definition like this:
```
template <> void K::func<B>(B &b) {}
```
Also, the following two are equivalent; only in the second case is T deduced from the parameter type:
```
template <> void K::func<B>(B &b);
template <> void K::func(B &b);
```
Upvotes: 2 <issue_comment>username_2: It turns out that `template <> void K::func(B &b);` is only a declaration; for a definition, `template void K::func(B &b);` is needed. The updated code:
```
struct K {
template <typename T> void func(T &t);
};
struct B {
void ok();
};
// definition
template <typename T> void K::func(T &t) { t.ok(); }
// specialization: Do NOT put template <> here !!!
template void K::func(B &b);
int main() {
K k;
B b;
k.func(b);
};
```
Upvotes: 1 [selected_answer]
|
2018/03/18
| 888 | 3,705 |
<issue_start>username_0: When I create Spring Boot Application it generates 2 classes:
```
@SpringBootApplication
public class App {
public static void main(String[] args) {
SpringApplication.run(App.class, args);
}
}
```
And the second on is test:
```
@RunWith(SpringRunner.class)
@SpringBootTest
public class AppTest {
@Test
public void contextLoads() {
}
}
```
1. As you can notice, the contextLoads test is empty. How should I provide a correct test for contextLoads? Should it stay empty, maybe? Is that correct? Or should I put something inside?
**UPDATE:**
2. Why should this test stay? If I have more tests in the application, I'll know whether the Spring context is loaded or not. Isn't it just *excessive*?
I read answers like [this](https://stackoverflow.com/questions/46650268/how-to-test-main-class-of-spring-boot-application) but it didn't give me a clear answer.
### Update for Second Question
The reason you want to keep this test is that when something fails you want to find the root cause. If you application context is unable to load, you can assume all other tests that are failing are due to this. If the above test passes, you can assume that the failing tests are unrelated to loading the context.
Upvotes: 3 <issue_comment>username_2: Actually, the `main()` method of the Spring Boot Application is not covered by the unit/integration tests as the Spring Boot tests don't invoke the `main()` method to start the Spring Boot application.
Now I consider that all answers of this [post](https://stackoverflow.com/questions/46650268/how-to-test-main-class-of-spring-boot-application) seem overkill.
They want to add a test of the `main()` method to make a metric tool happy (Sonar).
Loading a Spring context and loading the application **takes time**.
Don't add it in each developer build just to win about 0.1% of coverage in your application.
I added an answer about that.
---
Beyond your simple example and the other post that you refer to, in absolute terms it may make sense to create a test for the `main()` method if you included some logic in it. That is the case, for example, when you pass specific arguments to the application and handle them in the `main()` method.
Upvotes: 4 [selected_answer]<issue_comment>username_3: When you build a Spring boot application using Spring Initializer. It auto creates Application and its Test Class
```java
@SpringBootApplication
public class Application {
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
}
@SpringBootTest
class ApplicationTest {
@Test
void contextLoads() {
}
}
```
Note the use of `@SpringBootTest` annotation on test class which tells Spring Boot to look for a main configuration class (one with @SpringBootApplication, for instance) and use that to start a Spring application context. Empty `contextLoads()` is a test to verify if the application is able to load Spring context successfully or not.
You do not need to provide any test cases for empty method as such. Though you can do something like this to verify your controller or service bean context:-
```java
@SpringBootTest
public class ApplicationContextTest {
@Autowired
private MyController myController;
@Autowired
private MyService myService;
@Test
public void contextLoads() throws Exception {
assertThat(myController).isNotNull();
assertThat(myService).isNotNull();
}
}
```
Upvotes: -1
|
2018/03/18
| 451 | 1,761 |
<issue_start>username_0: Okay so let's assume, for simplicity's sake that my folder structure looks like this:
```
project
- index.html
- css
- style.css
- scss
- style.scss
- img
- image.jpg
```
I want to use my **image.jpg** as a background for a div. I am working from **style.scss** and then monitoring the scss to css in **style.css**. If I use the path **../../img/image.jpg** it will be displayed correctly in **style.scss**, but since I am monitoring this to **style.css**, the image will not be displayed because the path to the image from **style.css** is **../img/image.jpg**. This goes both ways: if I use the path **../img/image.jpg** the image will not be displayed because the path is incorrect for **style.scss**. How can I make this work?<issue_comment>username_1: I'd personally set the image URLs to fit the `style.css` needs.
From my point of view, `style.scss` is there only to provide more flexibility during the coding of your design. This way, I never use `.scss` files inside my HTML.
Bearing in mind that I always compile the `.scss` into `.css`, it makes no sense to me to set the path from the `.scss` file.
By the way, I know there are tools that auto-compile `.scss` files before serving them from the server, but I'm not a fan of this solution: instead of just requesting a file via HTTP GET, you'll need your server to compile code before returning it, so it'll obviously take a bit more time...
Upvotes: 2 [selected_answer]<issue_comment>username_2: One of the possible solutions is to use an absolute path to your images, like:
`/img/image.jpg`
therefore you will still have autocomplete in the IDE,
and the CSS will be transpiled correctly.
Upvotes: 0
|
2018/03/18
| 291 | 1,004 |
<issue_start>username_0: ```
.
.
.
//etc
```
In My Activity,
```
Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar_top);
TextView textView = (TextView) toolbar.findViewById(R.id.toolbar_title);
textView.setText("My Title");
setSupportActionBar(toolbar);
getSupportActionBar().setDisplayShowTitleEnabled(false);
```
And then..,
The Toolbar is displaying like this..
[](https://i.stack.imgur.com/aOYQO.png)
It seems I'm using my customized toolbar in a weird way. Where else should I place my Toolbar? I spent the whole night trying to add a custom toolbar. Any suggestion?<issue_comment>username_1: Try this
```
<?xml version="1.0" encoding="utf-8"?>
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Do not put the Toolbar inside the relative layout, but make the relative layout below the toolbar like below:
```
.
.
.
//etc
```
Upvotes: 2 <issue_comment>username_3: ```
```
Upvotes: 1
|
2018/03/18
| 470 | 1,632 |
<issue_start>username_0: I was trying to implement a similar structure with an idea to have a common trait to work with different Input instances regardless of their InType.
```
trait AnyInput {
type InType
val obj : InType
}
abstract class Input[T](obj: T) extends AnyInput {
type InType = T
}
case class InpImage(image: ByteStream) extends Input[ByteStream](image)
case class InpString(text: String) extends Input[String](text)
.
.
.
trait InputProcessor[T <: Input[T#InType]] {
...
}
```
and I get the "cyclic reference involving type T error" in the InputProcessor definition
It is important to notice that there might be a couple of different case classes implementing Input[String] or Input[ByteStream]. So writing it out as
```
case class StringInput(s: String) extends Input[String](s)
case class IntInput(numb: Int) extends Input[Int](numb)
```
is not the best workaround<issue_comment>username_1: Maybe you can use
```
trait InputProcessor[S, T <: Input[S]] {
// ...
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: you can try something like this
```
trait InputProcessor[S <: AnyInput, T <: Input[S#InType]] {
implicit val evidence: T =:= S = implicitly[T =:= S]
}
```
and than the implementation would be:
```
class ImageProcessor extends InputProcessor[InpImage, InpImage] { }
```
you can read more about `=:=` here: <https://stackoverflow.com/a/3427759/245024>
EDIT:
added an example and the implementation of the evidence.
Notice that when extending `InputProcessor`, if you pass 2 different type parameters, the code won't compile (as you would expect).
Upvotes: 0
|
2018/03/18
| 470 | 1,465 |
<issue_start>username_0: For some reason I have a complex data in the **app.component.ts** file:
```
export class AppComponent implements OnInit {
tests = {
'name': 'Bob',
'grade': '5th',
'score': [
['math', 'A', 'A+', 'A-'],
['english', 'B', 'B-', 'A'],
['french', 'A', 'A+', 'A'],
['chem', 'C', 'C', 'C'],
['sport', 'B', 'B', 'B']
]};
}
```
I want to show the 1st items under 'score' into a **select/option** dropdown button.
(These are: ['math', 'english', 'french', 'chem', 'sport'])
In the **app.component.html** file, the following code is one of my many tries. But sadly it does not work.
```
{{test.score[0]}}
```
How could I do it correctly?<issue_comment>username_1: Your HTML code is wrong.
Try this.
```
{{score[0]}}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: This is the correct way; you should be able to get the **selected value**:
```
{{test[0]}}
```
As **[ngValue]** you can use `test` if you want to get selected full array:`['math', 'A', 'A+', 'A-']`:
```
{{test[0]}}
```
or `test[0]` if you want first value of selected array(ex. `math`):
```
{{test[0]}}
```
[Code example](https://stackblitz.com/edit/angular-k9u7d8)
Also, don't forget to include `ReactiveFormsModule` in AppModule if we want to use **Reactive Forms** (formControl/formGroup directives...) as in the code example.
Upvotes: 0
|
2018/03/18
| 1,592 | 3,491 |
<issue_start>username_0: I am very new to `pandas` and trying to get the row `index` for any `value` higher than the `lprice`. Can someone give me a quick idea on what I am doing wrong?
Dataframe
```
StrikePrice
0 40.00
1 50.00
2 60.00
3 70.00
4 80.00
5 90.00
6 100.00
7 110.00
8 120.00
9 130.00
10 140.00
11 150.00
12 160.00
13 170.00
14 180.00
15 190.00
16 200.00
17 210.00
18 220.00
19 230.00
20 240.00
```
Now I am trying to figure out how to get the `row index` for any `value` which is `higher` than the `lprice`
```
lprice = 99
for strike in df['StrikePrice']:
strike = float(strike)
# print(strike)
if strike >= lprice:
print('The high strike is:' + str(strike))
ce_1 = strike
print(df.index['StrikePrice' == ce_1])
```
The above gives `0` as the `index`
I am not sure what I am doing wrong here.<issue_comment>username_1: Using the `index` attribute after boolean slicing.
```
lprice = 99
df[df.StrikePrice >= lprice].index
Int64Index([6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], dtype='int64')
```
If you insist on iterating and finding when you've found it, you can modify your code:
```
lprice = 99
for idx, strike in df['StrikePrice'].iteritems():
strike = float(strike)
# print(strike)
if strike >= lprice:
print('The high strike is:' + str(strike))
ce_1 = strike
print(idx)
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: I think best is filter index by [`boolean indexing`](http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing):
```
a = df.index[df['StrikePrice'] >= 99]
#alternative
#a = df.index[df['StrikePrice'].ge(99)]
```
Your code should be changed similar:
```
lprice = 99
for strike in df['StrikePrice']:
if strike >= lprice:
print('The high strike is:' + str(strike))
print(df.index[df['StrikePrice'] == strike])
```
Upvotes: 2 <issue_comment>username_3: [numpy.where(condition[, x, y])](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.where.html) does exactly this if we specify only `condition`.
`np.where()` returns the tuple
`condition.nonzero()`, the indices where `condition` is True, if only `condition` is given.
```
In [36]: np.where(df.StrikePrice >= lprice)[0]
Out[36]: array([ 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], dtype=int64)
```
PS thanks [@username_2 for the hint](https://stackoverflow.com/questions/49345585/how-to-find-the-row-index-in-pandas-column/49346124?noredirect=1#comment85692081_49346124) -- `np.where()` returns numerical index positions instead of DF index values:
```
In [41]: df = pd.DataFrame({'val':np.random.rand(10)}, index=pd.date_range('2018-01-01', freq='9999S', periods=10))
In [42]: df
Out[42]:
val
2018-01-01 00:00:00 0.459097
2018-01-01 02:46:39 0.148380
2018-01-01 05:33:18 0.945564
2018-01-01 08:19:57 0.105181
2018-01-01 11:06:36 0.570019
2018-01-01 13:53:15 0.203373
2018-01-01 16:39:54 0.021001
2018-01-01 19:26:33 0.717460
2018-01-01 22:13:12 0.370547
2018-01-02 00:59:51 0.462997
In [43]: np.where(df['val']>0.5)[0]
Out[43]: array([2, 4, 7], dtype=int64)
```
**workaround:**
```
In [44]: df.index[np.where(df['val']>0.5)[0]]
Out[44]: DatetimeIndex(['2018-01-01 05:33:18', '2018-01-01 11:06:36', '2018-01-01 19:26:33'], dtype='datetime64[ns]', freq=None)
```
Upvotes: 2
|
2018/03/18
| 1,300 | 2,981 |
<issue_start>username_0: What is the purpose of the empty return? A step-by-step explanation is needed: when `printInorder(node.right)` is executed and node becomes null, where will control be transferred?
```
void printInorder(Node node)
{
if (node == null)
return;
printInorder(node.left);
System.out.print(node.key + " ");
printInorder(node.right);
}
```
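(An illustrative sketch, not from the thread.) The empty `return` is the base case of the recursion: when `node` is null the call simply returns, and control resumes at the statement after the recursive call in the caller's frame. The same control flow traced in Python:

```
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def print_inorder(node, out):
    if node is None:
        return                      # base case: hand control back to the caller
    print_inorder(node.left, out)   # 1. finish the whole left subtree
    out.append(node.key)            # 2. then visit this node
    print_inorder(node.right, out)  # 3. then the right subtree; when this
                                    #    returns, this frame ends too and
                                    #    control pops back to ITS caller

#      2
#     / \
#    1   3
root = Node(2, Node(1), Node(3))
out = []
print_inorder(root, out)
print(out)  # [1, 2, 3]
```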
|
2018/03/18
| 1,435 | 5,065 |
<issue_start>username_0: Looking to filter a pivot table to be within a date range. The date filter is at the top of the pivot, with the main table having 3 columns. I have a picture with actual examples but can't upload here.
If I enter a date range of 1st Feb. 2018 - 1st March 2018 the filter works perfectly.
If I enter a date range of 1st Feb. 2018 - 28th Feb 2018 the filter misses out the 3rd Feb - 9th Feb, picking back up again for the remainder of the data from the 10th Feb.
Different date ranges produce variants of this behavior.
From my research online this type of filtering in VBA has a bug of some sort where the code reads the data in US date format, regardless of Excel settings & the data itself (hence the formatting code, without it causes a mismatch error). I've seen a couple of workarounds online such as using CLng but the method below is the closest I've got.
* The pivot table itself is on a worksheet called "Pivots". Columns
A-C, Date in cell B2, main table headers in row 4.
* The date range is on a worksheet called "Paretos", cell refs below.
* The table I'm working on here is PivotTable1
---
```
Sub FilterPivotDates()
Application.DisplayAlerts = False
Application.ScreenUpdating = False
Application.EnableEvents = False
Dim ws As Worksheet, ws2 As Worksheet, pt As PivotTable, pf As PivotField, PI As PivotItem
Dim FromDate As Date, ToDate As Date
Set ws = ThisWorkbook.Worksheets("Pivots")
Set ws2 = ThisWorkbook.Worksheets("Paretos")
FromDate = ws2.Range("B1").Value
ToDate = ws2.Range("E1").Value
pivno = 1
MCCol = 25
Set pt = ws.PivotTables("PivotTable" & pivno)
Set pf = pt.PivotFields("Date")
'On Error Resume Next
Do While pivno < 2 '25
Set pt = ws.PivotTables("PivotTable" & pivno)
Set pf = pt.PivotFields("Date")
pt.PivotFields("Date").ClearAllFilters
With pf
For Each PI In pf.PivotItems
If PI.Value >= Format(FromDate, "M/D/YYYY") And PI.Value <= Format(ToDate, "M/D/YYYY") Then PI.Visible = True Else PI.Visible = False
Next
End With
pivno = pivno + 1
Loop
Application.DisplayAlerts = True
Application.ScreenUpdating = True
Application.EnableEvents = True
End Sub
```
Stepping through using MsgBox commands, it seems the missing dates are failing on one of the date checks, so the AND function removes the entry. I can't work out what's going on.
Using Excel 2016<issue_comment>username_1: Excel stores Dates as numeric values, even though you might change the format of it from `"mm/dd/yyyy"` to `"dd-mmm-yy"` or to a monthly name like `"mmmm"`, it really doesn’t change the way Excel stored the value inside, which is Numeric.
For instance, take `01-Feb-2018`, if you **copy >> Paste Special (Values only)** to the neighbor cell, you’ll get `43132`.
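(A cross-check outside the thread, in Python rather than VBA.) Excel stores dates after 1 March 1900 as day counts from an effective epoch of 30-Dec-1899, so the `43132` above can be reproduced as:

```
from datetime import date

# Excel's effective day zero for dates after 1 March 1900
# (the fictitious 29-Feb-1900 is already accounted for)
EXCEL_EPOCH = date(1899, 12, 30)

def excel_serial(d):
    return (d - EXCEL_EPOCH).days

print(excel_serial(date(2018, 2, 1)))  # 43132, matching the value above
```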
In your case, the best way is to compare the numeric values of the dates.
In your code, replace your line:
```
If Pi.Value >= Format(FromDate, "M/D/YYYY") And Pi.Value <= Format(ToDate, "M/D/YYYY") Then ...
```
With:
```
If Pi.Value >= CDbl(FromDate) And Pi.Value <= CDbl(ToDate) Then ...
```
You could optimize your `Do While` loop a little:
```
Do While pivno < 2 '25
Set pt = ws.PivotTables("PivotTable" & pivno)
Set pf = pt.PivotFields("Date")
With pf
.ClearAllFilters
For Each pi In .PivotItems
' since you already used .ClearAllFilters, you don't need to use Visible = True,
' only hide the ones which are not within your desired dates range
If Not (pi.Value >= CDbl(FromDate) And pi.Value <= CDbl(ToDate)) Then pi.Visible = False
Next
End With
pivno = pivno + 1
Loop
```
Upvotes: 0 <issue_comment>username_2: Ok it appears I've found a workaround.
The source data used to create the pivot needs to be raw Excel numbers, 43108 or whatever rather than the date.
When this is done using CDbl(FromDate) & such appears to work.
Just an attempt to clarify, the issue stems from the pivot item name (or caption, or value etc.), when it is a date, not being able to be formatted or set or processed to anything other than a US date format. Trying to match any data to it doesn't seem to work, only changing the raw data as above & converting any filter criteria within the code using CDbl seems to get me anywhere.
Upvotes: 1 [selected_answer]<issue_comment>username_3: **PivotItems.Value** property returns a **String** value. **Format** function also returns a **String (Variant)** value.
Therefore, when you perform a comparison of *PI.Value* vs *FromDate* / *ToDate*, it seems that your code performs a TEXT comparison, not DATE comparison. In text comparison, some dates are rightfully "out of range".
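The same pitfall is easy to reproduce outside Excel. As a quick illustration (JavaScript here rather than VBA, purely to show the text-vs-value distinction), comparing dates as strings compares them character by character, so chronological order is lost:

```javascript
// "3/18/2018" is earlier than "12/1/2018" chronologically, but as text it
// compares greater, because the character "3" sorts after "1".
const asText = "3/18/2018" < "12/1/2018";                      // false
const asValue = new Date("3/18/2018") < new Date("12/1/2018"); // true
console.log(asText, asValue);
```

The VBA fix above is the same idea: `CDbl` forces both sides back to the underlying numeric value before comparing.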
Reproduce the simple table in picture below (ensure that all fields are set into text format before entering values), and you'll see that it can reproduce the same "buggy" behaviour as you mentioned.
Solution is, probably, to somehow convert the values you compare, back into date format.
[](https://i.stack.imgur.com/c8tRf.png)
Upvotes: 1
|
2018/03/18
| 856 | 3,136 |
<issue_start>username_0: I want to automate a report download from Adwords. This report is not part of their API. So I thought of downloading it using browser automation.
I'm using Puppeteer for browser automation.
But the problem is my account is 2-factor authentication enabled (I can't disable it due to security policy). And hence each time I will have to enter the OTP or backup codes.
I tried adding the backup codes to an array and taking one from it each time when required. But Google provides only 10 codes at a time and hence it won't be fully automated.
Is any way I can fully automate the entire workflow?<issue_comment>username_1: The workflow cannot be shared just like that in an answer. Here are some alternative resources that will help you achieve the desired result.
You are looking for the [Reporting API](https://developers.google.com/adwords/api/docs/guides/reporting) provided by google developers, it will let you download the reports as you wish. There is a [rate limit](https://developers.google.com/adwords/api/docs/ratesheet) set for this api. You will also need to read more about the [OAuth refresh token](https://developers.google.com/identity/protocols/OAuth2WebServer).
[](https://i.stack.imgur.com/lmpRc.png)
Otherwise, If you want to automate the entire workflow using Puppeteer. I'd suggest automating the **generation of OTP/Backup codes** as part of workflow. That way you can have unlimited OTP codes available for you. Though it feels using their API would be the best choice.
If you have any related code, add them in your question and I'll be happy to update my answer accordingly.
Upvotes: 0 <issue_comment>username_2: Not sure if it's still required, but as an alternative solution you can use Python with Selenium and pyotp (for the 2-factor authentication). Here I posted a sample login + 2-factor flow, which can be adapted for the API as well.
```
import pyotp
from pyotp import TOTP
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome('./chromedriver_v80.exe')
driver.get("https://www.example.com/")
wait = WebDriverWait(driver, 20)
login_username = driver.find_element_by_name("username")
login_username.clear()
login_username.send_keys(username)
login_passoword = driver.find_element_by_name("password")
login_passoword.clear()
login_passoword.send_keys(password)
driver.find_element_by_name("login").click()
# wait for the 2FA field to display
authField = wait.until(EC.presence_of_element_located((By.XPATH, "//*[@id='token']")))
# get the token from google authenticate
ga_secret = "<KEY>" #your GA 32 character hash
totp = TOTP(ga_secret)
token = totp.now()
print(token)
# enter the token in the UI
authField.send_keys(token)
# click on the button to complete 2FA
driver.find_element_by_xpath("//*[@id='token_login']").click()
driver.close()
```
Upvotes: 1
|
2018/03/18
| 789 | 2,707 |
<issue_start>username_0: My problem is that I want to insert a value in a textbox and refer to the element by its attribute value "text", as you can see below, but I am unable to access the value.
```
HI
function z()
{
var age=document.getElementsByTagName("text").value;
if(age<=18)
{
alert("age is less Than and equal to 18 Year")
}
else
{
alert("age is more than 18 Year")
}
}
```
<issue_comment>username_1: You have an id for the input field, so get it through document.getElementById and then analyze its value.
```js
function z()
{
var age=document.getElementById("text").value;
if(age<=18)
{
alert("age is less Than and equal to 18 Year")
}
else
{
alert("age is more than 18 Year")
}
}
```
```html
HI
```
Upvotes: 0 <issue_comment>username_2: What you are trying to do is get the element by tag name, but "text" is not a tag name; in your case the tag name is "input". Instead of using the tag name, use the id of the element. You have given id="text" to the element, so use getElementById("text").
Following is your code updated.
```
HI
function z() {
var age=document.getElementById("text").value;
if(age<=18)
{
alert("age is less Than and equal to 18 Year")
}
else
{
alert("age is more than 18 Year")
}
}
```
Upvotes: 2 <issue_comment>username_3: From the [W3Schools documentation](https://www.w3schools.com/jsref/met_document_getelementsbytagname.asp):
>
> The getElementsByTagName() method returns a collection of all elements in the document with the specified tag name, as a NodeList object.
>
>
>
You cannot select by tag "text" as this is not a valid tag. Here a valid tag would be "input" or "p". If you still want to select by the type of your input, try [query selectors](https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelector)
Upvotes: 1 [selected_answer]<issue_comment>username_4: Looking at your code, it seems you need a little clarification and reading about using the DOM in JavaScript.
What is `Tag` in `getElementsByTagName`?
1. A `tag` is something defined in HTML to tell what the control is, e.g. `input`, `div`, `p`, etc.
2. `getElementsByTagName` method will fetch all elements of that `tag` and any operation on the result will be performed on all of them.
To avoid this an `id` should be assigned to `html` elements and the `id` should be used in `JS` to get element and perform operations on it.
**Example**
```
var myObj = document.getElementById('ID_Of_Element');
```
>
> Ref [getElementsByTagName W3School](https://www.w3schools.com/jsref/met_document_getelementsbytagname.asp)
>
>
>
Upvotes: 0
|
2018/03/18
| 3,287 | 10,974 |
<issue_start>username_0: I am developing a Flutter app, in which I need to show navigation to the user for a place. So, how can I open a map application from my Flutter app like we do using external intent in Android?
Or their is any flutter-plugin for it?<issue_comment>username_1: You can just use the [`url_launcher`](https://pub.dartlang.org/packages/url_launcher) plugin to open maps. It launches map if installed or falls back to open the map on a browser.
Example:
```
import 'package:flutter/material.dart';
import 'package:url_launcher/url_launcher.dart';
void main() {
runApp(new Scaffold(
body: new Center(
child: new RaisedButton(
onPressed: _launchMaps,
child: new Text('Launch Maps'),
),
),
));
}
_launchMaps() async {
const url = "https://www.google.com/maps/search/?api=1&query=LATITUDE,LONGITUDE,17&query_place_id=PLACE_ID";
if (await canLaunch(url)) {
await launch(url);
} else {
throw 'Could not launch Maps';
}
}
```
Hope that helps!
Upvotes: 2 <issue_comment>username_2: I suggest you to use [url\_launcher](https://pub.dartlang.org/packages/url_launcher) dart package.
In this way you can use all url schemas to open (`phone`, `sms`, and even `maps` as in your case).
In order to open Google Maps either in Android and iOS you could use the [general Android Maps URI schema](https://developers.google.com/maps/documentation/urls/guide) as suggested by username_1.
```
_openMap() async {
const url = 'https://www.google.com/maps/search/?api=1&query=52.32,4.917';
if (await canLaunch(url)) {
await launch(url);
} else {
throw 'Could not launch $url';
}
}
```
If you want to give a choice on Android you could use the general [`geo:` URI schema](https://en.wikipedia.org/wiki/Geo_URI_scheme).
If you want specifically open iOS Maps API you can use [Cupertino Maps URI schema](https://developer.apple.com/library/content/featuredarticles/iPhoneURLScheme_Reference/MapLinks/MapLinks.html#//apple_ref/doc/uid/TP40007899-CH5-SW1).
If you choose to distinguish between Android and iOS (not using Google Maps Api schema for all platform) you have to do it also in your open map call in a way like this:
```
_openMap() async {
// Android
const url = 'geo:52.32,4.917';
if (await canLaunch(url)) {
await launch(url);
} else {
// iOS
const url = 'http://maps.apple.com/?ll=52.32,4.917';
if (await canLaunch(url)) {
await launch(url);
} else {
throw 'Could not launch $url';
}
}
}
```
Or you can check the OS at runtime with the [`dart:io` library `Platform` class](https://api.dartlang.org/stable/1.24.3/dart-io/Platform-class.html):
```
import 'dart:io';
_openMap() async {
// Android
var url = 'geo:52.32,4.917';
if (Platform.isIOS) {
// iOS
url = 'http://maps.apple.com/?ll=52.32,4.917';
}
if (await canLaunch(url)) {
await launch(url);
} else {
throw 'Could not launch $url';
}
}
```
Now that I have finished housekeeping (the real one... not some code refactoring... ^^') I can finish my answer.
As I said at the beginning, with url_launcher you can use all URI schemes in order to call, send SMS, send e-mail, etc.
Here is some code to do that:
```
_sendMail() async {
// Android and iOS
const uri = 'mailto:test@example.org?subject=Greetings&body=Hello%20World';
if (await canLaunch(uri)) {
await launch(uri);
} else {
throw 'Could not launch $uri';
}
}
_callMe() async {
// Android
const uri = 'tel:+1 222 060 888';
if (await canLaunch(uri)) {
await launch(uri);
} else {
// iOS
const uri = 'tel:001-22-060-888';
if (await canLaunch(uri)) {
await launch(uri);
} else {
throw 'Could not launch $uri';
}
}
}
_textMe() async {
// Android
const uri = 'sms:+39 349 060 888';
if (await canLaunch(uri)) {
await launch(uri);
} else {
// iOS
const uri = 'sms:0039-222-060-888';
if (await canLaunch(uri)) {
await launch(uri);
} else {
throw 'Could not launch $uri';
}
}
}
```
Even though [URI](https://it.wikipedia.org/wiki/Uniform_Resource_Identifier) schemes should be standards (RFC), sometimes their `authority` and `path` parts can differ between platforms (Android or iOS).
So here I manage the different OSes with exceptions, but you could do it better with the [`dart:io` library `Platform` class](https://api.dartlang.org/stable/1.24.3/dart-io/Platform-class.html):
```
import 'dart:io'
```
and then in code:
```
if (Platform.isAndroid) {
} else if (Platform.isIOS) {
}
```
I suggest you always test them in both environments.
You can check the Android and iOS schema documenation here:
* [Android](https://developer.android.com/guide/components/intents-common.html)
* [iOS](https://developer.apple.com/library/content/featuredarticles/iPhoneURLScheme_Reference/Introduction/Introduction.html#//apple_ref/doc/uid/TP40007899-CH1-SW1)
If you wanna something similar to startActivity in Android (but that works only for Android platform) you can use the dart package [android\_intent](https://pub.dartlang.org/packages/android_intent).
Upvotes: 7 [selected_answer]<issue_comment>username_3: For iOS use, no browser involved, directly to apps:
```
//waze
canLaunch("waze://")
launch("waze://?ll=${latitude},${longitude}&navigate=yes");
//gmaps
canLaunch("comgooglemaps://")
launch("comgooglemaps://?saddr=${latitude},${longitude}&directionsmode=driving")
```
What you need to do is to add to Info.plist:
```
LSApplicationQueriesSchemes
comgooglemaps
waze
```
Upvotes: 4 <issue_comment>username_4: In that case you just need to use [url\_launcher](https://pub.dev/packages/url_launcher) plugin which opens your app.
```
yourMap() async {
const url = "https://www.google.com/maps/search/?api=1&query=LATITUDE,LONGITUDE,17&query_place_id=PLACE_ID";
if (await canLaunch(url)) {
await launch(url);
} else {
throw 'Could not launch Maps';
}
}
```
and then call this yourMap() method on your button onPress.
Upvotes: 0 <issue_comment>username_5: I think you are looking for a scenario where you want to open device installed app from Flutter?
If so, you can use a package called device_apps.
This Flutter package also shows you the icon of the app to display, and you can open an app by package name as well.
Just browse all the methods it offers. I am using it in my Flutter fitness app to launch the installed music player.
<https://pub.dev/packages/device_apps>
Upvotes: 0 <issue_comment>username_6: **The url\_launcher plugin allows your Flutter application to perform actions like opening a web page in Safari or deep-linking into another application with context.**
**Adding the url\_launcher Plugin**
```
dependencies:
  flutter:
    sdk: flutter
  url_launcher: ^6.0.3
```

Then import the plugin in your Dart code:

```
import 'dart:async';
import 'package:flutter/material.dart';
import 'package:url_launcher/url_launcher.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'URL Launcher',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: MyHomePage(title: 'URL Launcher'),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key? key, required this.title}) : super(key: key);
final String title;
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
  Future<void>? _launched;
  String _phone = '';

  Future<void> _launchInBrowser(String url) async {
    if (await canLaunch(url)) {
      await launch(
        url,
        forceSafariVC: false,
        forceWebView: false,
        headers: {'my_header_key': 'my_header_value'},
      );
    } else {
      throw 'Could not launch $url';
    }
  }

  Future<void> _launchInWebViewOrVC(String url) async {
    if (await canLaunch(url)) {
      await launch(
        url,
        forceSafariVC: true,
        forceWebView: true,
        headers: {'my_header_key': 'my_header_value'},
      );
    } else {
      throw 'Could not launch $url';
    }
  }

  Future<void> _launchInWebViewWithJavaScript(String url) async {
    if (await canLaunch(url)) {
      await launch(
        url,
        forceSafariVC: true,
        forceWebView: true,
        enableJavaScript: true,
      );
    } else {
      throw 'Could not launch $url';
    }
  }

  Future<void> _launchInWebViewWithDomStorage(String url) async {
    if (await canLaunch(url)) {
      await launch(
        url,
        forceSafariVC: true,
        forceWebView: true,
        enableDomStorage: true,
      );
    } else {
      throw 'Could not launch $url';
    }
  }

  Future<void> _launchUniversalLinkIos(String url) async {
    if (await canLaunch(url)) {
      final bool nativeAppLaunchSucceeded = await launch(
        url,
        forceSafariVC: false,
        universalLinksOnly: true,
      );
      if (!nativeAppLaunchSucceeded) {
        await launch(
          url,
          forceSafariVC: true,
        );
      }
    }
  }

  Widget _launchStatus(BuildContext context, AsyncSnapshot<void> snapshot) {
    if (snapshot.hasError) {
      return Text('Error: ${snapshot.error}');
    } else {
      return const Text('');
    }
  }

  Future<void> _makePhoneCall(String url) async {
    if (await canLaunch(url)) {
      await launch(url);
    } else {
      throw 'Could not launch $url';
    }
  }

  @override
  Widget build(BuildContext context) {
    const String toLaunch = 'https://www.cylog.org/headers/';
    return Scaffold(
      appBar: AppBar(
        title: Text(widget.title),
      ),
      body: ListView(
        children: [
          Column(
            mainAxisAlignment: MainAxisAlignment.center,
            children: [
              const Padding(
                padding: EdgeInsets.all(16.0),
                child: Text('Launch URL : $toLaunch'),
              ),
              RaisedButton(
                onPressed: () => setState(() {
                  _launched = _launchInBrowser(toLaunch);
                }),
                child: const Text('Launch in browser'),
              ),
              const Padding(padding: EdgeInsets.all(16.0)),
              RaisedButton(
                onPressed: () => setState(() {
                  _launched = _launchInWebViewOrVC(toLaunch);
                }),
                child: const Text('Launch in app'),
              ),
              RaisedButton(
                onPressed: () => setState(() {
                  _launched = _launchInWebViewWithJavaScript(toLaunch);
                }),
                child: const Text('Launch in app(JavaScript ON)'),
              ),
              RaisedButton(
                onPressed: () => setState(() {
                  _launched = _launchInWebViewWithDomStorage(toLaunch);
                }),
                child: const Text('Launch in app(DOM storage ON)'),
              ),
              const Padding(padding: EdgeInsets.all(16.0)),
              RaisedButton(
                onPressed: () => setState(() {
                  _launched = _launchUniversalLinkIos(toLaunch);
                }),
                child: const Text(
                    'Launch a universal link in a native app, fallback to Safari.(Youtube)'),
              ),
              const Padding(padding: EdgeInsets.all(16.0)),
              RaisedButton(
                onPressed: () => setState(() {
                  _launched = _launchInWebViewOrVC(toLaunch);
                  Timer(const Duration(seconds: 5), () {
                    print('Closing WebView after 5 seconds...');
                    closeWebView();
                  });
                }),
                child: const Text('Launch in app + close after 5 seconds'),
              ),
              const Padding(padding: EdgeInsets.all(16.0)),
              FutureBuilder<void>(future: _launched, builder: _launchStatus),
            ],
          ),
        ],
      ),
    );
  }
}
```
Upvotes: 0
|
2018/03/18
| 1,420 | 4,593 |
<issue_start>username_0: ```
//loping for objects
const users = [
{isPremium: false},
{isPremium: false},
{isPremium: false},
{isPremium: false},
{isPremium: false},
]
function setUsersToPremium(users) {
// users is an array of user objects.
// each user object has the property 'isPremium'
// set each user's isPremium property to true
// return the users array
for (let key in users) {
users['isPremium'] = true;
}
return users;
}
setUsersToPremium(users);
```
I am trying to figure out how to loop through an array of objects and change their value from false to true. My result with this code is
```
[true, true, true, true, true ]
```
but what I want to do is change each
```
[isPremium: true]
```
I'm wondering what I'm doing that is keeping me from accessing this value.<issue_comment>username_1: You have to set as `users[key].isPremium = true;`
```js
//loping for objects
const users = [
{isPremium: false},
{isPremium: false},
{isPremium: false},
{isPremium: false},
{isPremium: false},
]
function setUsersToPremium(users) {
for (let key in users) {
users[key].isPremium = true;
}
//return users; <-- No need to return. You are changing the user variable directly with these code.
}
setUsersToPremium(users);
console.log(users);
```
---
The **ideal** way to do this is to use `forEach` instead of `for/in`
```js
const users = [{isPremium: false},{isPremium: false},{isPremium: false},{isPremium: false},{isPremium: false},];
function setUsersToPremium(users) {
users.forEach(o => o.isPremium = true);
}
setUsersToPremium(users);
console.log(users);
```
If you don't want to affect the original array, you can clone the object using `Object.assign`
```js
const users = [{isPremium: false},{isPremium: false},{isPremium: false},{isPremium: false},{isPremium: false},];
function setUsersToPremium(users) {
return users.map(o => {
let x = Object.assign({}, o);
x.isPremium = true;
return x;
});
}
var newUsers = setUsersToPremium(users);
console.log(newUsers);
```
Upvotes: 2 <issue_comment>username_2: You are using a `for..in` loop on an array, so you'll be getting the indexes of that array (`"0"`, `"1"`, `"2"`, ... as strings) instead of the objects themselves.
To fix that you can use a `for..of` loop:
```
function setUsersToPremium(users) {
for (let user of users) {
user['isPremium'] = true;
}
return users;
}
```
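A minimal sketch of the difference between the two loop forms (variable names are illustrative):

```javascript
const users = [{ isPremium: false }, { isPremium: false }];

const keys = [];
for (const k in users) keys.push(k);   // "0", "1": string indices, not objects

for (const user of users) {            // the array's values, i.e. the objects
  user.isPremium = true;
}

console.log(keys);                           // [ '0', '1' ]
console.log(users.every(u => u.isPremium));  // true
```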
Upvotes: 2 <issue_comment>username_3: ```
const users = [{
isPremium: false
}, {
isPremium: false
}, {
isPremium: false
}, {
isPremium: false
}, {
isPremium: false
}, ]
function setUsersToPremium(users) {
// users is an array of user objects.
// each user object has the property 'isPremium'
// set each user's isPremium property to true
// return the users array
for (let key of users) {
key.isPremium = true;
}
return users;
}
setUsersToPremium(users);
```
Upvotes: 0 <issue_comment>username_4: You can use `.map` which creates a new array with your desired result.
```js
const users = [{
isPremium: false
},
{
isPremium: false
},
{
isPremium: false
},
{
isPremium: false
},
{
isPremium: false
},
]
const result = users.map(o => ({ isPremium: true}))
console.log(result)
```
Upvotes: 1 <issue_comment>username_5: Here is also a simple way without using any loops. Since you want to change the value of the `isPremium` property on all of the objects, you can stringify the JSON array, replace each `false` value with `true`, and then parse it back to an array. As simple as that:
```js
const users = [
{'isPremium': false},
{'isPremium': false},
{'isPremium': false},
{'isPremium': false},
{'isPremium': false},
];
var res = JSON.parse(JSON.stringify(users).replace(/false/g,'true'));
console.log(res);
```
Upvotes: 0 <issue_comment>username_6: ```js
const users = [
{isPremium: false},
{isPremium: false},
{isPremium: false},
{isPremium: false},
{isPremium: false}
];
/** setUsersToPremium()
* param: users, an array of objects
* sets each user's isPremium property to true
* returns users array
**/
function setUsersToPremium() {
for (let i = 0, max = users.length; i < max; i++) {
users[i].isPremium = true;
}
}
setUsersToPremium();
console.log(users);
```
A couple of notes:
* Unnecessary to pass the users array of objects to function nor does
the function need to return this array to effect desired changes.
* Using a simple for-loop may seem old-fashioned but it still works well.
Upvotes: 0
|
2018/03/18
| 1,949 | 7,128 |
<issue_start>username_0: I have been following a tutorial on how to build a full stack MEAN application and everything has been functioning fine up until this point (registration, login and authentication.)
I'm now trying to do a post request for the blog page and am receiving the following error:
```
Cannot POST /blogs/newBlog
```
All that I've done so far is create a schema, a route, and then make the relevant changes to index.js. The schema file provided below is the one from the author's repository (unlike the other two files, it is in its completed form). The problem still persists, so I do not believe the schema to be the cause.
**Blog schema:**
```
/* ===================
Import Node Modules
=================== */
const mongoose = require('mongoose'); // Node Tool for MongoDB
mongoose.Promise = global.Promise; // Configure Mongoose Promises
const Schema = mongoose.Schema; // Import Schema from Mongoose
// Validate Function to check blog title length
let titleLengthChecker = (title) => {
// Check if blog title exists
if (!title) {
return false; // Return error
} else {
// Check the length of title
if (title.length < 5 || title.length > 50) {
return false; // Return error if not within proper length
} else {
return true; // Return as valid title
}
}
};
// Validate Function to check if valid title format
let alphaNumericTitleChecker = (title) => {
// Check if title exists
if (!title) {
return false; // Return error
} else {
// Regular expression to test for a valid title
const regExp = new RegExp(/^[a-zA-Z0-9 ]+$/);
return regExp.test(title); // Return regular expression test results (true or false)
}
};
// Array of Title Validators
const titleValidators = [
// First Title Validator
{
validator: titleLengthChecker,
message: 'Title must be more than 5 characters but no more than 50'
},
// Second Title Validator
{
validator: alphaNumericTitleChecker,
message: 'Title must be alphanumeric'
}
];
// Validate Function to check body length
let bodyLengthChecker = (body) => {
// Check if body exists
if (!body) {
return false; // Return error
} else {
// Check length of body
if (body.length < 5 || body.length > 500) {
return false; // Return error if does not meet length requirement
} else {
return true; // Return as valid body
}
}
};
// Array of Body validators
const bodyValidators = [
// First Body validator
{
validator: bodyLengthChecker,
message: 'Body must be more than 5 characters but no more than 500.'
}
];
// Validate Function to check comment length
let commentLengthChecker = (comment) => {
// Check if comment exists
if (!comment[0]) {
return false; // Return error
} else {
// Check comment length
if (comment[0].length < 1 || comment[0].length > 200) {
return false; // Return error if comment length requirement is not met
} else {
return true; // Return comment as valid
}
}
};
// Array of Comment validators
const commentValidators = [
// First comment validator
{
validator: commentLengthChecker,
message: 'Comments may not exceed 200 characters.'
}
];
// Blog Model Definition
const blogSchema = new Schema({
title: { type: String, required: true, validate: titleValidators },
body: { type: String, required: true, validate: bodyValidators },
createdBy: { type: String },
createdAt: { type: Date, default: Date.now() },
likes: { type: Number, default: 0 },
likedBy: { type: Array },
dislikes: { type: Number, default: 0 },
dislikedBy: { type: Array },
comments: [{
comment: { type: String, validate: commentValidators },
commentator: { type: String }
}]
});
// Export Module/Schema
module.exports = mongoose.model('Blog', blogSchema);
```
**routes/blogs.js**
```
const User = require('../models/user'); // Import User Model Schema
const jwt = require('jsonwebtoken');
const config = require('../config/database');
module.exports = (router) => {
router.post('/newBlog', (req, res) => { // TODO: change URL
res.send('test worked');
});
return router; // Return router object to main index.js
}
```
**index.js**
```
/* ===================
Import Node Modules
=================== */
const env = require('./env');
const express = require('express');
const app = express();
const router = express.Router();
const mongoose = require('mongoose');
mongoose.Promise = global.Promise;
const config = require('./config/database');
const path = require('path');
const authentication = require('./routes/authentication')(router);
const blogs = require('./routes/blogs')(router);
const bodyParser = require('body-parser');
const cors = require('cors');
const port = process.env.PORT || 8080;
// Database Connection
mongoose.connect(config.uri, {
useMongoClient: true,
}, (err) => {
// Check if database was able to connect
if (err) {
console.log('Could NOT connect to database: ', err);
} else {
console.log('Connected to ' + config.db);
}
});
// Middleware
app.use(cors({ origin: 'http://localhost:4200' }));
app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json());
app.use(express.static(__dirname + '/public'));
app.use('/authentication', authentication);
app.use('/blogs', blogs);
app.get('*', (req, res) => {
res.sendFile(path.join(__dirname + '/public/index.html'));
});
// Start Server: Listen on port 8080
app.listen(port, () => {
console.log('Listening on port ' + port + ' in ' + process.env.NODE_ENV + ' mode');
});
```
I have been enjoying this course greatly and would appreciate any help (even if it is to simply rule out possible causes.)
**Error in full:**
```
Error
Cannot POST /blogs/newBlog
```
<issue_comment>username_1: Your problem has to do with:
```
app.use('/blogs', blogs);
```
The `blogs` function should be a function taking `(req, res)` but it actually takes `(router)`.
You have two options:
Create a `router` and pass into `blogs`, e.g. `app.use('/blogs', blogs(router));` or
Add `app.post()`, e.g.
```
app.post('/blogs/newBlog', (req, res) => {
res.send('test worked');
});
```
Upvotes: 0 <issue_comment>username_2: Just try to change from body-parser to Express's built-in parser; that is...
instead of
```
app.use(bodyParser.json())
```
use `app.use(express.json())`
Upvotes: -1 <issue_comment>username_3: Please replace this:
```
router.post('/newBlog', (req, res) => {
res.send('test worked');
});
```
With this (on all your methods "get, post, use" etc):
```
// Make sure to always add a slash both before and after the endpoint!
router.post('/newBlog/', (req, res) => {
res.send('test worked');
});
```
I encounter this issue mostly if I don't add endpoint slashes properly.
Upvotes: -1 <issue_comment>username_4: ### Check spelling of your route
In my case I was posting from a url that had a missing letter.
I was doing `POST /webooks/` but I needed to do `POST /webhooks` (note the missing `h` letter).
So it may also be that you misspell your routes.
Upvotes: 0
|
2018/03/18
| 1,470 | 5,764 |
<issue_start>username_0: I have following:
```
it('invalid use', () => {
Matcher(1).case(1, () => {});
});
```
The `case` method is supposed to throw after some delay. How can I tell Mocha/Chai that this is what I want? The test should pass when the exception is thrown, and must fail when it is not.
**Consider `case` method off limits, it cannot be changed.**
For testing purposes it should be equivalent to:
```
it('setTimeout throw', _ => {
setTimeout(() => { throw new Error(); }, 1); // this is given, cannot be modified
});
```
I tried:
```
it('invalid use', done => {
Matcher(1).case(1, () => {});
// calls done callback after 'case' may throw
setTimeout(() => done(), MatcherConfig.execCheckTimeout + 10);
});
```
But that's not really helping me, because the test behavior is exactly inverted: when an exception from `case` (`setTimeout`) is not thrown, the test passes (it should fail), and when an exception is thrown, the test fails (it should succeed).
I read somewhere someone mentioning global error handler, but I would like to solve this cleanly using Mocha and/or Chai, if it is possible (I guess Mocha is already using it in some way).<issue_comment>username_1: From [Chai documentation](http://www.chaijs.com/api/bdd/#method_throw) :
>
> When no arguments are provided, .throw invokes the target function and asserts that an error is thrown.
>
>
>
So you could do something like:
```
expect(() => Matcher(1).case(1, () => {})).to.throw();
```
Upvotes: -1 <issue_comment>username_2: You cannot handle exceptions from within an asynchronous callback; e.g. see [Handle error from setTimeout](https://stackoverflow.com/q/41431605/603003). This has to do with the execution model ECMAScript uses. I suppose the only way to catch it is in fact to employ some environment-specific global error handling, e.g. `process.on('uncaughtException', ...)` in Node.js.
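A small plain-Node demonstration of that point (no test framework involved): the `try/catch` around `setTimeout` never fires, and only the global handler sees the error:

```javascript
let observed = null;

// The global handler is the only place the thrown error can be seen:
process.once('uncaughtException', (err) => {
  observed = err.message;
});

try {
  setTimeout(() => { throw new Error('boom'); }, 1);
} catch (e) {
  observed = 'caught locally'; // never reached: the callback runs on a later tick
}
```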
If you convert your function to Promises, however, you can easily test it using the Chai plugin [chai-as-promised](https://www.npmjs.com/package/chai-as-promised):
```
import * as chai from 'chai';
import chaiAsPromised = require('chai-as-promised');
chai.use(chaiAsPromised);
const expect = chai.expect;
it('invalid use', async () => {
await expect(Matcher(1).case(1, () => {})).to.eventually.be.rejected;
});
```
Upvotes: 1 <issue_comment>username_3: Any Mocha statements like `before`, `after` or `it` will work asynchronously if you return a promise. I generally use something like the below for async tests.
Also don't forget to set timeout `this.timeout(...)` if you expect the async function to take more than 2 seconds.
```
it('some test', () => {
return new Promise(function(resolve,reject){
SomeAsyncFunction(function(error,vals) {
if(error) {
return reject(error);
} else {
try {
//do some chai tests here
} catch(e) {
return reject(e);
}
return resolve();
}
});
});
});
```
Specifically for your case, since we expect some error to be thrown after a period of time (assuming the empty callback you have provided to `.case` should not run due to the exception being thrown) then you can write it something like:
```
it('invalid use', () => {
//define the promise to run the async function
let prom = new Promise(function(resolve,reject){
//reject the promise if the function does not throw an error
//note I am assuming that the callback won't run if the error is thrown
//also note this error will be passed to prom.catch so need to do some test to make sure it's not the error you are looking for.
Matcher(1).case(1, () => {return reject(new Error('did not throw'))});
});
prom.catch(function(err){
try {
expect(err).to.be.an('error');
expect(err.message).to.not.equal('did not throw');
//more checks to see if err is the error you are looking for
} catch(e) {
//err was not the error you were looking for
return Promise.reject(e);
}
//tests passed
return Promise.resolve();
});
//since it() receives a promise as a return value it will pass or fail the test based on the promise.
return prom;
});
```
Upvotes: 0 <issue_comment>username_4: If your tested code calls `setTimeout` with a callback that throws, and no-one is catching this exception, then:
1) this code is broken
2) the only way to see that problem is a platform global exception handler like `process.on('uncaughtException')` mentioned by user username_2
The last-resort option is to stub `setTimeout` for the duration of the test (for example using `sinon.stub`) or just manually.
In such a stubbed `setTimeout` you can decorate the timeout handler, detect the exception and call the appropriate asserts.
NOTE, this is a last-resort solution - your app code is broken and should be fixed to properly propagate errors, not only for testing but ... well, to be good code.
Pseudocode example:
```js
it('test', (done) => {
const originalSetTimeout = setTimeout;
setTimeout = (callback, timeout) => {
originalSetTimeout(() => {
try {
callback();
} catch(error) {
// CONGRATS, you've intercepted exception
// in _SOME_ setTimeout handler
}
}, timeout)
}
yourTestCodeThatTriggersErrorInSomeSetTimeoutCallback(done);
})
```
NOTE2: I intentionally didn't write proper async cleanup code; it's homework. Again, see `sinon.js` and its `sandbox`
NOTE3: It will catch all `setTimeout` calls during test duration. Beware, there are dragons.
Upvotes: -1
|
2018/03/18
| 180 | 709 |
<issue_start>username_0: I made my first commit on GitHub last year and I want to commit again. Should I replace that first commit or add a new commit? How do I do it?<issue_comment>username_1: Most people keep the first commit in GitHub. It's no big deal to change it, just a bit unusual.
The idea is the same as with any commit: amend it and force-push.
```
git commit --amend (change message in the editor or whatever)
git push --force (will override GitHub history)
```
Upvotes: -1 <issue_comment>username_2: If you are a beginner, keep the history as it is and add a new commit. This is the normal way to use git.
And (really) take some time to read some basic tutorials on git
Upvotes: 2
|
2018/03/18
| 480 | 1,774 |
<issue_start>username_0: I was wondering how can I display the form errors under each field. Considering I have the log in form, if there is a problem with username, then the specific errors should be displayed under, otherwise if there is a problem with the password, the error should be displayed under the password field.
I have been searching for a proper way to do it, this is what I came with:
```html
{% csrf_token %}
#### Log In
{{ form.username }}
{{ form.password }}
Log In
{% if form.errors %}
{% for field in form %}
{% for error in field.errors %}
**{{ error|escape }}**
{% endfor %}
{% endfor %}
 {% for error in form.non_field_errors %}
**{{ error|escape }}**
{% endfor %}
{% endif %}
```<issue_comment>username_1: Try this :)
```html
{% csrf_token %}
#### Log In
{{ form.username }}
{{ form.username.errors }}
{{ form.password }}
{{ form.password.errors }}
Log In
```
Upvotes: 0 <issue_comment>username_2: It's good practice to get into the habit of looping over forms so that you can write generic templates for all forms. So I'd do something like:
```
{% csrf_token %}
 #### Log In
 {% if form.non_field_errors %}
 {% for error in form.non_field_errors %}
 **{{ error|escape }}**
 {% endfor %}
 {% endif %}
 {% for hidden in form.hidden_fields %}
 {{ hidden }}
 {% endfor %}
 {% for field in form.visible_fields %}
 {{ field.label_tag }}
 {{ field }}
 {% if field.help_text %}
 {{ field.help_text }}
{% endif %}
{% if field.errors %}
{{ field.errors }} {# outputs a UL with class 'errorlist' #}
{% endif %}
{% endfor %}
Log In
```
You could then include that form in a template & include it with whatever submit/cancel buttons you wanted.
Upvotes: 2 [selected_answer]
|
2018/03/18
| 515 | 2,308 |
<issue_start>username_0: I've built a fairly complex graphical user interface for a data analysis pipeline that a neuroscience lab is using. I built it with Python in a Jupyter Notebook using `ipywidgets` and various interactive plotting libraries such as bokeh. It's basically just a GUI for an existing Python analysis package, but many researchers don't have any or sufficient programming skills to use it and hence need a GUI.
The problem is that it's a fairly involved setup process. You have to install anaconda, install a bunch of libraries, launch a Jupyter notebook server, etc. This installation process is not feasible for people with minimal tech skills.
How can I package and deliver my Jupyter Notebook app as close to a "download and double-click the installer" type of setup as possible? It needs to be easy for non-tech people. Does the new JupyterLab offer anything here? Could I package it as an Electron app some how?<issue_comment>username_1: Have you tried [conda constructor](https://github.com/conda/constructor)?
* It creates a double click + follow steps installer, which installs python, conda and specified conda packages.
* Installing Jupyter this way also creates a start menu entry in windows to start the Jupyter server.
* It also allows you to specify pre- and post-install batch scripts, that you can use for extra configuration.
* It can create linux and osx installers as well.
For distribution and updates of apps (.ipynb files), I once used the startup scripts of the Jupyter server to check for newer versions in a github repo and pull the new versions of the files if there were any.
Also, for a friendlier user experience inside Jupyter, check [appmode](https://github.com/oschuett/appmode).
Upvotes: 4 [selected_answer]<issue_comment>username_2: You might be able to use pyinstaller for it. If you can start your program by calling a simple python script.
```
pip install pyinstaller
pyinstaller --onefile your_script.py
```
If you execute this in a windows environment a windows exe file is created which contains all the dependencies. I am not sure what happens on a linux system. The exe file may become very large though.
You might run into problems if the script needs any temporary files etc. I am trying to figure that part out myself.
Upvotes: 1
|
2018/03/18
| 533 | 1,736 |
<issue_start>username_0: ```
public class Example1 {
public static void main(String[] args) {
System.out.println("h\tw");
System.out.println("h w");
System.out.println("hello\tworld");
System.out.println("hello world");
}
}
```
this gave me the output below
[output](https://i.stack.imgur.com/DMll0.png)
In the first case `\t` produced 7 spaces and in the second case `\t` produced 3 spaces.
What is the reason for that?<issue_comment>username_1: A tab advances the cursor to the next tab stop, a multiple of 4 or 8 columns depending on the `console` you are using. Your first printed line contains 1 character (`h`) before the tab, so there will be `7` spaces between h & w. In the second case `hello` is `5` characters long, so there will be `3` spaces between hello and world.
Note: you mentioned there are 6 spaces between `h and w`, but I believe it will be 7 spaces.
For the below example:
```
System.out.println("helloworld\tworld"); // line 1
System.out.println("helloworld world");
```
Line 1 will generate `helloworld world` as output. Now, `helloworld` has `10` characters. I am assuming your console uses tab stops at multiples of 8. The lowest multiple of 8 greater than `10` is `8*2 = 16`, so there will be `16-10 = 6` spaces.
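The arithmetic above can be checked with Python's `str.expandtabs`, which simulates the same fixed tab stops (the 8-column stop is an assumption about the console being used):

```python
# Simulate a console with tab stops every 8 columns.
for s in ["h\tw", "hello\tworld", "helloworld\tworld"]:
    expanded = s.expandtabs(8)
    # Count how many spaces the single tab turned into.
    spaces = expanded.count(" ")
    print(repr(expanded), "->", spaces, "spaces")  # 7, 3 and 6 spaces respectively
```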
Upvotes: 3 [selected_answer]<issue_comment>username_2: A tab actually adds a variable number of spaces, up to a maximum. Just imagine tab stops every 8 characters (vertical lines where the cursor stops when you use a tab)
```
|_ _ _ _ _ _ _ _|_ _ _ _ _ _ _ _|
```
If you use the string "Hello\tWorld", it will move to the next tab stop after Hello.
```
|H E L L O _ _ _|W O R L D _ _ _|
```
The same applies to a shorter string like 'h\tw'
```
|H _ _ _ _ _ _ _|W _ _ _ _ _ _ _|
```
Upvotes: 1
|
2018/03/18
| 1,477 | 3,080 |
<issue_start>username_0: I have a dataset in tibble in R like the one below:
```
# A tibble: 50,045 x 5
ref_key start_date end_date
1 123 2010-01-08 2010-01-13
2 123 2010-01-21 2010-01-23
3 123 2010-03-10 2010-04-14
```
I need to create another tibble that each row only store one date, like the one below:
```
ref_key date
1 123 2010-01-08
2 123 2010-01-09
3 123 2010-01-10
4 123 2010-01-11
5 123 2010-01-12
6 123 2010-01-13
7 123 2010-01-21
8 123 2010-01-22
9 123 2010-01-23
```
Currently I am writing an explicit loop for that like below:
```
for (loop in (1:nrow(input.df))) {
if (loop%%100==0) {
print(paste(loop,'/',nrow(input.df)))
}
temp.df.st00 <- input.df[loop,] %>% data.frame
temp.df.st01 <- tibble(ref_key=temp.df.st00[,'ref_key'],
date=seq(temp.df.st00[,'start_date'],
temp.df.st00[,'end_date'],1))
if (loop==1) {
output.df <- temp.df.st01
} else {
output.df <- output.df %>%
bind_rows(temp.df.st01)
}
}
```
It works, but slowly: given that I have >50k rows to process, it takes a few minutes to finish the loop.
I wonder if this step can be vectorized; is it something like `rowwise()` in `dplyr`?
```
library(tidyverse)
res <- df1 %>%
rownames_to_column('rn') %>%
nest(-rn, -ref_key) %>%
mutate(date = map(data, ~ seq(.x$start_date, .x$end_date, by = "1 day"))) %>%
select(-data, -rn) %>%
unnest
head(res, 9)
# ref_key date
#1 123 2010-01-08
#2 123 2010-01-09
#3 123 2010-01-10
#4 123 2010-01-11
#5 123 2010-01-12
#6 123 2010-01-13
#7 123 2010-01-21
#8 123 2010-01-22
#9 123 2010-01-23
```
Upvotes: 2 <issue_comment>username_2: One solution is to use `tidyr::complete` to expand rows. Since row expansion is based on `start-date` and `end_date` of a row, hence `group_by` on `row_number` will help to generate sequence of `Date` between `start-date` and `end_date`.
```
library(dplyr)
library(tidyr)
df %>% #mutate(rnum = row_number()) %>%
group_by(row_number()) %>%
complete(start_date = seq.Date(max(start_date), max(end_date), by="day")) %>%
fill(ref_key) %>%
ungroup() %>%
select(ref_key, date = start_date)
# # A tibble: 45 x 2
# ref_key date
#
# 1 123 2010-01-08
# 2 123 2010-01-09
# 3 123 2010-01-10
# 4 123 2010-01-11
# 5 123 2010-01-12
# 6 123 2010-01-13
# 7 123 2010-01-21
# 8 123 2010-01-22
# 9 123 2010-01-23
# 10 123 2010-03-10
# # ... with 35 more rows
```
*Data*
```
df <- read.table(text = "ref_key start_date end_date
123 2010-01-08 2010-01-13
123 2010-01-21 2010-01-23
123 2010-03-10 2010-04-14", header = TRUE, stringsAsFactor = FALSE)
df$start_date <- as.Date(df$start_date)
df$end_date <- as.Date(df$end_date)
```
Upvotes: 1
|
2018/03/18
| 1,556 | 3,707 |
<issue_start>username_0: I have an error in my Gradle file for `compile 'com.android.support:appcompat-v7:27.0.2'`.
The Gradle version is 2.3.0.
I searched a lot, but could not solve this problem.
My error is:
>
> All com.android.support libraries must use the exact same version specification (mixing versions can lead to runtime crashes). Found versions 27.0.2, 25.1.0, 25.0.0. Examples include com.android.support:animated-vector-drawable:27.0.2 and com.android.support:design:25.1.0
>
>
>
`app/build.gradle`:
```
android {
compileSdkVersion 27
buildToolsVersion "27.0.2"
defaultConfig {
applicationId "com.example.pegah_system.sanduqchehproject"
minSdkVersion 15
targetSdkVersion 27
versionCode 1
versionName "1.0"
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
vectorDrawables.useSupportLibrary = true
}
}
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', {
exclude group: 'com.android.support', module: 'support-annotations'
})
compile('com.squareup.retrofit2:retrofit:2.0.0-beta4') {
exclude module: 'okhttp'
}
compile 'com.github.bumptech.glide:glide:3.8.0'
compile 'com.squareup.okhttp3:logging-interceptor:3.8.0'
compile 'com.github.ybq:Endless-RecyclerView:1.0.3'
compile 'com.github.ybq:Android-SpinKit:1.1.0'
compile 'com.github.qdxxxx:BezierViewPager:v1.0.5'
compile 'com.android.support:appcompat-v7:27.0.2'
compile 'com.android.support.constraint:constraint-layout:1.0.2'
compile 'com.roughike:bottom-bar:2.1.2'
compile 'com.squareup.okhttp3:okhttp:3.0.0'
compile 'com.squareup.retrofit2:converter-gson:2.0.0-beta4'
compile 'com.android.support:recyclerview-v7:27.0.2'
compile 'com.squareup.picasso:picasso:2.5.2'
compile 'com.android.support:support-v4:27.0.2'
testCompile 'junit:junit:4.12'
}
```
Upvotes: 1
|
2018/03/18
| 2,378 | 9,602 |
<issue_start>username_0: I've tried to create a class to connect to a MongoDB (and get a gridFS connection using `gridfs-stream`). But with that I get two problems:
1. I sometimes get the mongo error `server instance in invalid state connected`
2. It is impossible for me to mock this class out using Jest
So I would be very thankful if someone could help me optimize this class into a really solid, working class. For example, I don't like the `let that = this` in the connect() function.
[Example repo](https://github.com/jaqua/mongodb-class)
**DB class**
```
const mongo = require('mongodb')
const Grid = require('gridfs-stream')
const { promisify } = require('util')
export default class Db {
constructor (uri, callback) {
this.db = null
this.gfs = null
const server = process.env.MONGO_SERVER || 'localhost'
const port = process.env.MONGO_PORT || 27017
const db = process.env.MONGO_DB || 'test'
// Is this the correct way to connect (using mongo native driver)?
this.connection = new mongo.Db(db, new mongo.Server(server, port))
this.connection.open = promisify(this.connection.open)
this.connected = false
return this
}
async connect (msg) {
let that = this
if (!this.db) {
try {
await that.connection.open()
that.gfs = Grid(that.connection, mongo)
this.connected = true
} catch (err) {
console.error('mongo connection error', err)
}
}
return this
}
isConnected () {
return this.connected
}
}
```
**Example**
This function will add a new user to the DB using the class above:
```
import bcrypt from 'bcrypt'
import Db from './lib/db'
const db = new Db()
export async function createUser (obj, { username, password }) {
if (!db.isConnected()) await db.connect()
const Users = db.connection.collection('users')
return Users.insert({
username,
password: bcrypt.hashSync(password, 10),
createdAt: new Date()
})
}
```
**Unit test**
I need to create a unit test to test if the mongoDB method is called. No integration test for testing the method.
So I need to mock the DB connection, collection and insert method.
```
import bcrypt from 'bcrypt'
import { createUser } from '../../user'
import Db from '../../lib/db'
const db = new Db()
jest.mock('bcrypt')
describe('createUser()', () => {
test('should call mongoDB insert()', async () => {
bcrypt.hashSync = jest.fn(() => SAMPLE.BCRYPT)
// create somekind of mock for the insert method...
db.usersInsert = jest.fn(() => Promise.resolve({ _id: '50<PASSWORD>' }))
await createUser({}, {
username: 'username',
password: '<PASSWORD>'
}).then((res) => {
// test if mocked insert method has been called
expect(db.usersInsert).toHaveBeenCalled()
// ... or better test for the returned promise value
})
})
})
```<issue_comment>username_1: You are stubbing on an instance of DB, not the actual DB class. Additionally I don't see the `db.usersInsert` method in your code. We can't write your code for you, but I can point you in the right direction. Also, I don 't use Jest but the concepts from Sinon are the same. The best thing to do in your case I believe is to stub out the prototype of the class method that returns an object you are interacting with.
Something like this:
```
// db.js
export default class Db {
getConnection() {}
}
// someOtherFile.js
import Db from './db';
const db = new Db();
export default async () => {
const conn = await db.getConnection();
await connection.collection.insert();
}
// spec file
import {
expect
} from 'chai';
import {
set
} from 'lodash';
import sinon from 'sinon';
import Db from './db';
import testFn from './someOtherFile';
describe('someOtherFile', () => {
it('should call expected funcs from db class', async () => {
const spy = sinon.spy();
const stub = sinon.stub(Db.prototype, 'getConnection').callsFake(() => {
return set({}, 'collection.insert', spy);
});
await testFn();
sinon.assert.called(spy);
});
});
```
Upvotes: 1 <issue_comment>username_2: There are multiple ways to go about this. I will list few of them
* Mock the DB class using a Jest manual mock. This could be cumbersome if you are using too many mongo functions. But since you are encapsulating most through the DB class it may still be manageable
* Use a mocked mongo instance. [This](https://github.com/williamkapke/mongo-mock) project allows you to simulate a MongoDB and persist data using js file
* Use a [in-memory](https://github.com/hemerajs/mongo-memory) mongodb
* Use a actual mongodb
I will showcase the first case here, which you posted about with code and how to make it work. So first thing we would do is update the `__mocks__/db.js` file to below
```
jest.mock('mongodb');
const mongo = require('mongodb')
var mock_collections = {};
var connectError = false;
var connected = false;
export default class Db {
constructor(uri, callback) {
this.__connectError = (fail) => {
connected = false;
connectError = fail;
};
this.clearMocks = () => {
mock_collections = {};
connected = false;
};
this.connect = () => {
return new Promise((resolve, reject) => {
process.nextTick(
() => {
if (connectError)
reject(new Error("Failed to connect"));
else {
resolve(true);
this.connected = true;
}
}
);
});
};
this.isConnected = () => connected;
this.connection = {
collection: (name) => {
mock_collections[name] = mock_collections[name] || {
__collection: name,
insert: jest.fn().mockImplementation((data) => {
const ObjectID = require.requireActual('mongodb').ObjectID;
let new_data = Object.assign({}, {
_id: new ObjectID()
},data);
return new Promise((resolve, reject) => {
process.nextTick(
() =>
resolve(new_data))
}
);
})
,
update: jest.fn(),
insertOne: jest.fn(),
updateOne: jest.fn(),
};
return mock_collections[name];
}
}
}
}
```
Now few explanations
* `jest.mock('mongodb');` will make sure any actual mongodb call gets mocked
* The `connected`, `connectError`, `mock_collections` are global variables. This is so that we can impact the state of the `Db` that your `user.js` loads. If we don't do this, we won't be able to control the mocked `Db` from within our tests
* `this.connect` shows how you can return a promise and also how you can simulate a error connecting to DB when you want
* `collection: (name) => {` makes sure that your call to `createUser` and your test can get the same collection interface and check if the mocked functions were actually called.
* `insert: jest.fn().mockImplementation((data) => {` shows how you can return data by creating your own implementation
* `const ObjectID = require.requireActual('mongodb').ObjectID;` shows how you can get an actual module object when you have already mocked `mongodb` earlier
Now comes the testing part. This is the updated `user.test.js`
```
jest.mock('../../lib/db');
import Db from '../../lib/db'
import { createUser } from '../../user'
const db = new Db()
describe('createUser()', () => {
beforeEach(()=> {db.clearMocks();})
test('should call mongoDB insert() and update() methods 2', async () => {
let User = db.connection.collection('users');
let user = await createUser({}, {
username: 'username',
password: '<PASSWORD>'
});
console.log(user);
expect(User.insert).toHaveBeenCalled()
})
test('Connection failure', async () => {
db.__connectError(true);
let ex = null;
try {
await createUser({}, {
username: 'username',
password: '<PASSWORD>'
})
} catch (err) {
ex= err;
}
expect(ex).not.toBeNull();
expect(ex.message).toBe("Failed to connect");
})
})
```
Few pointers again
* `jest.mock('../../lib/db');` will make sure that our manual mock gets loaded
* `let user = await createUser({}, {` since you are using `async`, you will not use `then` or `catch`. That is the point of using `async` function.
* `db.__connectError(true);` will set the global variable `connected` to `false` and `connectError` to true. So when `createUser` gets called in the test it will simulate a connection error
* `ex= err;`, see how I capture the exception and take out the expect call. If you do expect in the `catch` block itself, then when an exception is not raised the test will still pass. That is why I have done exception testing outside the `try/catch` block
Now comes the part of testing it by running `npm test` and we get
[](https://i.stack.imgur.com/g6dCN.png)
All of it is committed to below repo you shared
<https://github.com/jaqua/mongodb-class>
Upvotes: 3 [selected_answer]
|
2018/03/18
| 684 | 2,207 |
<issue_start>username_0: How can I remove duplicate rows and sum their values in Excel?
**EXAMPLE :**
```
AAA 111
BBB 555
AAA 111
CCC 222
```
**TO**
```
AAA 222
BBB 555
CCC 222
```<issue_comment>username_1: **Excel 2007 and above**
Bring your data into a column, select the column, and go to Data > Remove Duplicates; that should do it.
[](https://i.stack.imgur.com/sH5k1.png)
**Excel 2003**
Bring the data into a column and sort it in any order (ascending or descending). Use the formula as shown in the picture, then delete all rows with zero.
[](https://i.stack.imgur.com/YnDC0.png)
In case you also want to sum the amounts, there are two ways -
**With Pivot**
Create a Pivot on your data (You must have named columns for that, I have named them var1 and var2) - select the data, goto insert > pivot table.
[](https://i.stack.imgur.com/Xxiht.png)
It will automatically select range, and press OK, this will by default generate a pivot in new sheet
[](https://i.stack.imgur.com/RjDOP.png)
In new pivot fields, drag var1 to rows and var2 to values. By default (if all values are numeric), it will give you sum
[](https://i.stack.imgur.com/YExN3.png)
**Without Pivot**
Create unique list of items by one of the methods above, and sum using sumif formula (as shown below) -
[](https://i.stack.imgur.com/HxWHl.png)
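In case the screenshot does not come through, the SUMIF step looks roughly like this (the cell references are my assumption about the layout: raw pairs in columns A:B, the de-duplicated keys in column D):

```
=SUMIF($A$1:$A$4, D1, $B$1:$B$4)
```

Copied down next to each unique key, it sums column B over every row whose column A value equals that key.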
Upvotes: 1 <issue_comment>username_2: What you really want is consolidation. The process which you are describing can be achieved by using Pivot Tables.
* Assign suitable headers to your data if they are not there.
* Select Data as you have now.
* Choose >Insert>Pivot Tables.
* In "Row Labels" put the column having `AAA, BBB` etc.
* In "Values" section put the other column of which you need `SUM`.
Upvotes: 1 [selected_answer]
|
2018/03/18
| 814 | 2,591 |
<issue_start>username_0: I was looking for ways to improve the performance of reading a big chunk of records without indexing (it's my understanding that adding indexes could cause issues with SAP support).
My query is simple: it has to return all sales orders that have invoices, plus all independent orders and invoices, as there is no clear cycle in the system.
```
SELECT
T5."CardCode",
T5."CardName",
T4."ItemCode",
T4."ItemName",
T1."DocNum",
T1."DocDate",
T3."DocNum",
T3."DocDate",
T3."DocCur",
ISNULL(T2."PriceBefDi"*T2."Quantity", 0),
ISNULL(T2."DiscPrcnt", 0),
ISNULL(T2."LineTotal", 0),
ISNULL(T2."VatSum", 0),
ISNULL(T3."DocTotal", 0),
T2."TaxCode",
T5."Country"
FROM
RDR1 T0
FULL JOIN
ORDR T1 ON T0."DocEntry" = T1."DocEntry"
FULL JOIN
INV1 T2 ON T0."DocEntry" = T2."BaseEntry"
AND T0."LineNum" = T2."BaseLine"
AND T0."ObjType" = T2."BaseType"
FULL JOIN
OINV T3 ON T2."DocEntry" = T3."DocEntry"
JOIN
OITM T4 ON T2."ItemCode" = T4."ItemCode"
OR T0."ItemCode" = T4."ItemCode"
JOIN
OCRD T5 ON T3."CardCode" = T5."CardCode"
OR T1."CardCode" = T5."CardCode"
JOIN
OSTC T6 ON T2."TaxCode" = T6."Code"
OR T0."TaxCode" = T6."Code"
WHERE
T1.[DocDate] >= [%0]
AND T1.[DocDate] <= [%1]
OR T3.[DocDate] >= [%0]
AND T3.[DocDate] <= [%1]
```
While this query returns the desired result, the performance is not the best when I select a large date range.
I've done a bit of research, but all the solutions I found so far were not suitable/applicable for my case: it's an SAP system, so I can't go messing with the standard tables.
Thanks in advance.<issue_comment>username_1: I cannot know the model of your base so it's not easy to say but, to speed up performance: avoid FULL JOIN. If you read some [doc](https://www.w3schools.com/sql/sql_join_full.asp) you'll see it select all the row in each column you passed to it, which is a lot in your case.
My advice would be to rewrite the SQL request without FULL JOIN, only with NATURAL and INNER one.
Upvotes: 0 <issue_comment>username_2: You should examine the query plan (produced by `EXPLAIN`), but in general `OR` is notorious for not using indexes.
Try splitting the query into two parts and put each side of the `OR` in each:
```
SELECT
...
FROM ...
WHERE T1.[DocDate] BETWEEN [%0] AND [%1]
UNION
SELECT
...
FROM ...
WHERE T3.[DocDate] BETWEEN [%0] AND [%1]
```
Note also the use use of `BETWEEN` for more clarity.
Upvotes: 3 [selected_answer]
|
2018/03/18
| 817 | 3,140 |
<issue_start>username_0: I'm using coreos tectonic sandbox. My deployment.yaml file contains the container which should detect the docker daemon running on host via kubernetes.
The container uses docker daemon to identify docker events. For some reason docker daemon is not getting detected.
```
deployment.yaml
containers:
- name: idn-docker
image: sample/id-docker:latest
- name: docker-socket
mountpath: /var/run/docker.sock
```
Can someone please help me identify what the problem is?<issue_comment>username_1: For running Docker in Docker, you have 2 options: DooD (Docker out of Docker) and DinD (Docker in Docker). I think you need the first one, because you need access to events on the host machine.
Here is a good [article](https://applatix.com/case-docker-docker-kubernetes-part-2/) about both schemes.
Example of pod's configuration:
```
apiVersion: v1
kind: Pod
metadata:
name: idn-docker
spec:
containers:
- name: idn-docker
image: sample/id-docker:latest
volumeMounts:
- mountPath: /var/run
name: docker-sock
volumes:
- name: docker-sock
hostPath:
path: /var/run
```
You can use the `containers` section from the example in your deployment, because the structure in the `template` section is the same Pod template as a standalone pod configuration.
But, please keep in mind, that solution will have some limitations:
>
> *Pod Networking* - Cannot access the container using Pod IP.
>
>
> *Pod Lifecycle* - On Pod termination, this container will keep running especially if the container was started with -d flag.
>
>
> *Pod Cleanup* - Graph storage will not be clean up after pod terminates.
>
>
> *Scheduling and Resource Utilization* - Cpu and Memory requested by Pod, will only be for the Pod and not the container spawned from the Pod. Also, limits on CPU and memory settings for the Pod will not be inherited by the spawned container.
>
>
>
Upvotes: 1 <issue_comment>username_2: Securing Docker Socket
----------------------
As in [Why we don't let non-root users run Docker in CentOS, Fedora, or RHEL](http://www.projectatomic.io/blog/2015/08/why-we-dont-let-non-root-users-run-docker-in-centos-fedora-or-rhel/), CoreOS by default does not have a docker group and prevents access to the docker socket by limiting it to root, to be secure.
A Kubernetes pod, which is a Docker container process, will not run as root by default, and SELinux labels will also prevent the pods/containers from accessing the docker socket. Check the pod process with ps -Z and the docker socket with ls -Z to verify the labels.
Allowing access to the Docker socket from a container is a security risk, but if it is mandatory for some reason, one way is to run the pod as privileged using a security context and make the pod process run as root (with the associated risks).
```
securityContext:
privileged: true
```
I'm not sure about Tectonic, but OpenShift by default prevents using hostPath mounts, and Tectonic may have a similar restriction. Another thing to look into is the service account used for the pod and the roles/bindings for that account.
Upvotes: 0
|
2018/03/18
| 276 | 877 |
<issue_start>username_0: I am looking for strings in which a single letter is followed by a repeated letter (like abbb or cddd), for any letters from A to Z.
I am using `@"\b([a-zA-Z]{1})|([a-zA-Z]{1,})\b"`, which is only partially working. I need to combine/merge these regexes.
I would appreciate it if anyone could help me in this regard.<issue_comment>username_1: You could accomplish this with the following regex:
```
Regex regex = new Regex(@"\b[A-Za-z]([A-Za-z])\1+\b");
```
`([A-Za-z])` captures the 2nd character of the word, `\1` references this character, and the `+` quantifier verifies that all characters up to the word boundary are equal to that captured 2nd character.
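The same backreference idea can be sketched in Python's `re` module (the question uses C#, but the pattern syntax is identical here):

```python
import re

# One letter, then a second letter repeated at least once, as a whole word.
pattern = re.compile(r"\b[A-Za-z]([A-Za-z])\1+\b")

words = ["abbb", "cddd", "abc", "xy"]
matches = [w for w in words if pattern.fullmatch(w)]
print(matches)  # ['abbb', 'cddd']
```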
Upvotes: 3 [selected_answer]<issue_comment>username_2: If `aaaa` or `bbb` shouldn't match, I'd go with this one:
```
\b([A-Za-z])(?!\1)([A-Za-z])\2+\b
```
Otherwise @username_1's regex works fine.
Upvotes: 1
|
2018/03/18
| 549 | 2,056 |
<issue_start>username_0: I was practicing search algorithms for arrays.
I came across the problem of finding the missing and the duplicate element using the XOR approach, for a given array containing integers from 1 to n.
[Find the missing and duplicate elements in an array in linear time and constant space](https://stackoverflow.com/questions/5766936/find-the-missing-and-duplicate-elements-in-an-array-in-linear-time-and-constant)
I understand well how we can get the separate values X and Y (one is the duplicate and the other is the missing element).
However, I am unable to understand how we are able to decide which one is the duplicate and which one is missing.
(As per the given solutions, the XOR over the list whose elements have the set bit gives the missing element, and the XOR over the other list gives the duplicate.)
However, I am unable to understand the logic behind this decision.
Please help me to understand the logic behind this decision.<issue_comment>username_1: Once you have found x and y, simply loop over the existing array again: if x does not exist in the array, then the missing number is x; otherwise it is y.
Upvotes: -1 <issue_comment>username_2: When doing an XOR operation, if a number is present an even number of times the XOR will be zero, and if it is present an odd number of times the XOR will be that number.
Ex- XOR of a number with even frequency: 5^5 or 3^3^3^3 will be zero.
Ex- XOR of a number with odd frequency: 5^5^5^5^5=5 or 3^3^3=3 will be the same number.
If you look closely, you can see that finding the duplicate and finding the missing element both come down to the same thing. Let's take an example and understand:
if some number is missing, it means the frequency of that number is zero, which is an even number,
and if some integer is duplicated, it means the frequency of that number is two, which is also even, and whenever you XOR the same number an even number of times it leads to zero.
Ex- consider a dupArray(A)={1,2,3,5,5} and a missingArray(B)={1,2,3,4,\_}, with the range from 1 to 5,
and the actual array(C)={1,2,3,4,5}.
A^C={5}
B^C={5}
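To make the decision step concrete, here is a small self-contained Java sketch of the whole technique (the class and variable names are my own, not taken from the linked solutions): after the set-bit partition, one extra scan of the input array settles which of the two bucket results is the duplicate, exactly as described above.

```java
public class XorMissingDuplicate {
    // Returns {missing, duplicate} for an array a that should contain
    // 1..n but has one value missing and another value repeated.
    static int[] solve(int[] a, int n) {
        // XOR of all array elements and of 1..n cancels every value that
        // appears an even number of times, leaving missing ^ duplicate.
        int x = 0;
        for (int v : a) x ^= v;
        for (int i = 1; i <= n; i++) x ^= i;

        // Pick any set bit of (missing ^ duplicate); the two numbers
        // differ on it, so it splits everything into two XOR buckets.
        int bit = x & -x;
        int withBit = 0, withoutBit = 0;
        for (int v : a) {
            if ((v & bit) != 0) withBit ^= v; else withoutBit ^= v;
        }
        for (int i = 1; i <= n; i++) {
            if ((i & bit) != 0) withBit ^= i; else withoutBit ^= i;
        }

        // withBit and withoutBit now hold the two answers in some order;
        // one extra scan over the array tells which one actually occurs
        // in it (that one is the duplicate, the other is missing).
        for (int v : a) {
            if (v == withBit) return new int[]{withoutBit, withBit};
        }
        return new int[]{withBit, withoutBit};
    }

    public static void main(String[] args) {
        int[] r = solve(new int[]{1, 2, 3, 5, 5}, 5);
        System.out.println("missing=" + r[0] + " duplicate=" + r[1]);
        // prints: missing=4 duplicate=5
    }
}
```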
Upvotes: 0
|
2018/03/18
| 868 | 2,977 |
<issue_start>username_0: I am running a Spark job through an Oozie spark action. The Spark job uses HiveContext to perform some requirement. The cluster is configured with Kerberos. When I submit the job using spark-submit from the console, it runs successfully. But when I run the job from Oozie, it ends up with the following error.
```
18/03/18 03:34:16 INFO metastore: Trying to connect to metastore with URI thrift://localhost.local:9083
18/03/18 03:34:16 ERROR TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
```
workflow.xml
```
${jobTracker}
${nameNode}
${master}
Analysis
com.demo.analyzer
${appLib}
--jars ${sparkLib} --files ${config},${hivesite} --num-executors ${NoOfExecutors} --executor-cores ${ExecutorCores} --executor-memory ${ExecutorMemory} --driver-memory ${driverMemory}
${emailToAddress}
Output of workflow ${wf:id()}
Results from line count: ${wf:actionData('shellAction')['NumberOfLines']}
Bash action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]
```
Do I need to configure anything related to Kerberos in workflow.xml? Am I missing anything here?
Any help appreciated.
Thanks in advance.<issue_comment>username_1: Upload your keytab to the server, then refer to this keytab file and credential as parameters in the spark-opts in your workflow. Let me know if it works or not. Thanks.
```
--keytab nagendra.keytab --principal "<EMAIL>"
--jars ${sparkLib} --files ${config},${hivesite} --num-executors ${NoOfExecutors} --executor-cores ${ExecutorCores} --executor-memory
${ExecutorMemory} --driver-memory ${driverMemory}
```
Upvotes: 0 <issue_comment>username_2: You need to add hcat credentials for the thrift URI in the Oozie workflow. This enables successful Kerberos authentication to the metastore for the thrift URI.
Add the below credentials tag in the Oozie workflow.
```
hcat.metastore.uri
${thrift\_uri}
hcat.metastore.principal
${principal}
```
And provide the credentials to the `spark` action as below:
```
${jobTracker}
${nameNode}
${master}
Analysis
com.demo.analyzer
${appLib}
--jars ${sparkLib} --files ${config},${hivesite} --num-executors ${NoOfExecutors} --executor-cores ${ExecutorCores} --executor-memory ${ExecutorMemory} --driver-memory ${driverMemory}
```
The `thrift_uri` and `principal` can be found in `hive-site.xml`. thrift\_uri will be set in the hive-site.xml property:
```
hive.metastore.uris
thrift://xxxxxx:9083
```
Also, principal will be set in hive-site.xml property:
```
hive.metastore.kerberos.principal
hive/\_HOST@domain.COM
```
Upvotes: 3 [selected_answer]
|
2018/03/18
| 787 | 2,645 |
<issue_start>username_0: I am using Maven 3 and Nexus for deploying our artifacts to a repository.
I saw that the non-unique option was deprecated in Maven 3, so all SNAPSHOT artifacts are deployed with a timestamp and I am fine with that. The problem is that the artifacts do not seem to be stored under the version I specified (0.6-SNAPSHOT), so when I try to fetch this dependency the build fails because it can't find it.
***This is the dependency definition in the pom:***
```
com.globals
globals-general
0.6-SNAPSHO
```
***And this is the error I get when I try to get the dependency:***
```
Failed to execute goal on project mprest-mgrid-infra-cache: Could not resolve dependencies for project com.mprest.mgrid.infra:mprest-mgrid-infra-cache:jar:0.6-SNAPSHOT: Could not find artifact com.mprest.mgrid.globals:mprest-mgrid-globals-general:jar:0.6-SNAPSHOT ->
```
***This is my pom relevant part:***
```
nexus-snapshots
http://nexus:8081/repository/maven-snapshots/
nexus-releases
http://nexus:8081/repository/maven-releases/
```
***And this is the structure:***
[](https://i.stack.imgur.com/3a6JC.png)
|
2018/03/18
| 216 | 705 |
<issue_start>username_0: I'd like to pass arguments to a piped node script. However, `node` treats its first argument as a script file. How can I bypass this and maintain the pipe?
```
echo "console.log(process.argv)" | node xxyyxx
```
>
> Cannot find module .../xxyyxx
>
>
><issue_comment>username_1: For a pipe, you could use the following construct to define where the piped input will be placed:
`echo "console.log(process.argv)" | xargs -I {} node {}`
Or, you could use command substitution if it doesn't have to be a pipe:
`node $(echo "console.log(process.argv)")`
Upvotes: -1 <issue_comment>username_2: Use `-` after node:
```
echo "console.log(process.argv)" | node - xxyyxx
```
Upvotes: 0
|
2018/03/18
| 2,479 | 5,534 |
<issue_start>username_0: File\_a
```
1 MIR6859-1 2340 DDX11L1 3222
2 MIR6859-1 4860 WASH7P 7074
3 WASH7P 326 MIR1302-2 670
4 FAM138A 15 MIR1302-2 5730
8 LOC729737 7270 OR4F5 64205
9 LOC729737 3070 OR4F5 68405
10 LOC729737 88330 LOC100132287 94996
11 LOC100132287 86996 LOC729737 96330
12 LOC100132287 80196 LOC729737 103130
13 LOC100132287 72396 LOC729737 110930
14 LOC100132287 61196 LOC729737 122130
15 LOC100132287 56596 LOC729737 126730
```
File\_b
```
10 LOC7 883
15 TYUI 678
8 LOC123 764
40 QWER 456
8 LOC125 783
```
and expected output is
```
1 MIR6859-1 2340 DDX11L1 3222
2 MIR6859-1 4860 WASH7P 7074
3 WASH7P 326 MIR1302-2 670
4 FAM138A 15 MIR1302-2 5730
8 LOC729737 7270 OR4F5 64205 LOC123 764 LOC125 783
9 LOC729737 3070 OR4F5 68405
10 LOC729737 88330 LOC100132287 94996 LOC7 883
11 LOC100132287 86996 LOC729737 96330
12 LOC100132287 80196 LOC729737 103130
13 LOC100132287 72396 LOC729737 110930
14 LOC100132287 61196 LOC729737 122130
15 LOC100132287 56596 LOC729737 126730 TYUI 678
40 QWER 456
```
So basically this is a Natural Join based on the equality of the first column in both files.
I tried various commands after searching the web -
```
join -a1 file_a file_b
```
and
```
paste file_a file_b
```
but am not getting the desired output.<issue_comment>username_1: **`awk`** solution:
```
awk 'NR == FNR{ a[$1] = ($1 in a? a[$1] OFS : "")$2 OFS $3; next }
$1 in a{ $0 = $0 OFS a[$1]; delete a[$1] }1;
END{ for (i in a) print i, a[i] }' file_b file_a
```
The output:
```
1 MIR6859-1 2340 DDX11L1 3222
2 MIR6859-1 4860 WASH7P 7074
3 WASH7P 326 MIR1302-2 670
4 FAM138A 15 MIR1302-2 5730
8 LOC729737 7270 OR4F5 64205 LOC123 764 LOC125 783
9 LOC729737 3070 OR4F5 68405
10 LOC729737 88330 LOC100132287 94996 LOC7 883
11 LOC100132287 86996 LOC729737 96330
12 LOC100132287 80196 LOC729737 103130
13 LOC100132287 72396 LOC729737 110930
14 LOC100132287 61196 LOC729737 122130
15 LOC100132287 56596 LOC729737 126730 TYUI 678
40 QWER 456
```
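For readers less comfortable with awk, the same group-and-concatenate idea can be sketched in Java; the file names `File_a`/`File_b`, integer keys, and whitespace-separated fields are assumptions taken from the question, not a fixed format:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ConcatJoin {
    // Group every line of both inputs by its first (integer) field,
    // concatenating the remaining fields in encounter order; a TreeMap
    // keeps the keys in ascending numeric order for printing.
    static Map<Integer, String> join(List<String> fileA, List<String> fileB) {
        Map<Integer, String> byKey = new TreeMap<>();
        for (List<String> file : List.of(fileA, fileB)) {
            for (String line : file) {
                String[] f = line.trim().split("\\s+", 2);
                if (f.length < 2) continue;                 // skip malformed lines
                byKey.merge(Integer.parseInt(f[0]), f[1],
                        (old, add) -> old + " " + add);     // concatenate, not join
            }
        }
        return byKey;
    }

    public static void main(String[] args) throws Exception {
        Map<Integer, String> joined = join(
                Files.readAllLines(Paths.get("File_a")),
                Files.readAllLines(Paths.get("File_b")));
        joined.forEach((k, rest) -> System.out.println(k + " " + rest));
    }
}
```

Note that duplicate keys are concatenated onto one output line, which matches the expected output in the question rather than a relational join.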
Upvotes: 3 [selected_answer]<issue_comment>username_2: The following `awk` may help you here too.
```
awk 'FNR==NR{val=$1;$1="";sub(/^ +/,"");a[val]=a[val]?a[val] OFS $0:$0;next} {print $0,a[$1]}' FIle_b FIle_a
```
Adding a non-one liner form of solution too now.
```
awk '
FNR==NR{
val=$1;
$1="";
sub(/^ +/,"");
a[val]=a[val]?a[val] OFS $0:$0;
next}
{
print $0,a[$1]}
' FIle_b FIle_a
```
***Explanation:*** Adding explanation of code too here.
```
awk '
FNR==NR{ ##FNR==NR is a condition which is TRUE only when first Input_file named FIle_b in this case is being read. Do following in case of this condition is TRUE.
val=$1; ##Creating variable val whose value is first field of the current line.
$1=""; ##Nullifying the first field value in current line.
sub(/^ +/,""); ##Using sub out of the box utility of awk here to substitute initial space with NULL here on current line.
a[val]=a[val]?a[val] OFS $0:$0;##Creating an array named a whose index is variable val and it concatenates its own value in it too. to cover all duplicates in file.
next} ##next is awk out of the box keyword which will skip all further lines from here.
{
 print $0,a[$1]} ##This print statement only executes when the 2nd Input_file is being read; it prints the current line along with the value of array a whose index is the first field of the current line.
' FIle_b FIle_a ##Mentioning Input_file(s) here, first Input_file is FIle_b and second Input_file is FIle_a.
```
Upvotes: 1 <issue_comment>username_3: I would like to add some remarks, to clarify the matter:
1) The join field is duplicated in `file_b`:
```
8 LOC123 764
...
8 LOC125 783
```
In a “real” join in a database, this would produce two lines of output
```
8 LOC729737 7270 OR4F5 64205 LOC123 764
8 LOC729737 7270 OR4F5 64205 LOC125 783
```
Not the concatenation you want:
```
8 LOC729737 7270 OR4F5 64205 LOC123 764 LOC125 783
```
So, it is misleading to say that you want a join.
2) Even if there were no duplicate lines, it would be a *full outer join* and not a *natural join* because you want all unpaired lines from both sources.
3) `paste` doesn’t work because it only pairs lines with corresponding line numbers in both files, it knows no concept of *join field*.
4) `join` (the unix command) doesn’t work not only because it wants input files sorted by the join field (which would be easy to provide) but also because it does not support full outer joins: the option `-a1` performs a left outer join (include unpairable lines from first source) and the option `-a2` performs a right outer join (include unpairable lines from second source), but using the two options together requires the further option `--nocheck-order` and produces something which is not a full outer join.
5) That said, I am also providing an `awk` solution, which is shorter than the ones provided so far and generates the same output:
```
awk '{k=$1;sub("^"k,"");a[k]=a[k] OFS $0}
END{for(k in a)print k a[k]|"sort -k1,1n"}' file_a file_b
```
6) If the amount of whitespace between output fields is not significant, you can also use the shorter
```
awk '{k=$1;$1="";a[k]=a[k] $0}
END{for(k in a)print k a[k]|"sort -k1,1n"}' file_a file_b
```
Upvotes: 0
|
2018/03/18
| 124 | 466 |
<issue_start>username_0: I want to use the HTML onclick attribute inside a tooltip/title. I always get an Uncaught SyntaxError, whether I use single or double quotes.
Example:
```
'>
click me
```
Thanks in advance<issue_comment>username_1: ```
'>
```
Here is the error: you forgot to put the double quotes after the onclick attribute.
Upvotes: -1 <issue_comment>username_2: You can change the single quotes inside alert to a template literal (ES6):
```
click me
```
Upvotes: 0
|
2018/03/18
| 301 | 1,418 |
<issue_start>username_0: Prior to Java 8, I knew when to use abstract classes and when to use interfaces, but since Java 8 introduced default and static methods, these can now be provided in an interface.
An interface and an abstract class are now almost similar, except that you can create a constructor in an abstract class whereas you can't in an interface.
Apart from this, I want to know when to use an abstract class and when to use a Java 8 interface, with real-world examples.<issue_comment>username_1: If you need **multiple inheritance in Java**, you have to use interfaces instead of abstract classes.
Upvotes: 0 <issue_comment>username_2: By introducing default methods in interfaces, Java 8 may in some cases remove the need to introduce an intermediary abstract/base class that implements the interface with the default behavior for every subclass.
Sometimes the abstract/base class is still required for other reasons (legacy, the need to contain fields, and so forth), but its subclasses may still benefit from a default implementation without having to define it themselves.
So a default implementation that before Java 8 was defined in the base class may now be defined directly in the interface as a default method.
The `stream()` method defined in the `Collection` interface is a good example.
`AbstractCollection` and its subclasses such as `ArrayList` don't need to define it. It is directly inherited from the interface.
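A minimal Java sketch of this idea (the `Greeter` names are illustrative, not from any real API): the interface carries the default behavior, so an implementing class gets it without an intermediate abstract base class.

```java
interface Greeter {
    String name();

    // Before Java 8 this shared behavior would have lived in an
    // abstract base class; now the interface can carry it directly.
    default String greet() {
        return "Hello, " + name();
    }
}

class ConsoleGreeter implements Greeter {
    @Override
    public String name() {
        return "world";
    }
    // greet() is inherited from the interface's default method.
}

public class DefaultMethodDemo {
    public static void main(String[] args) {
        System.out.println(new ConsoleGreeter().greet());
        // prints: Hello, world
    }
}
```

An abstract class would still be the right choice if `Greeter` needed per-instance fields or a constructor, which interfaces cannot have.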
Upvotes: 3 [selected_answer]
|