Dataset columns:
  date        string (length 10)
  nb_tokens   int64  (60 to 629k)
  text_size   int64  (234 to 1.02M)
  content     string (length 234 to 1.02M)
2018/03/18
492
1,946
<issue_start>username_0: I am trying to make my Bullet fire continuously while the screen is being touched. This is what I have so far, but it's not really working. ``` func fireBullet() { let bullet = SKSpriteNode(imageNamed: "bullet") bullet.name = "Bullet" bullet.setScale(1.5) // Bullet Size bullet.position = player.position bullet.zPosition = 1 bullet.physicsBody = SKPhysicsBody(rectangleOf: bullet.size) bullet.physicsBody!.affectedByGravity = false bullet.physicsBody!.categoryBitMask = PhysicsCategories.Bullet bullet.physicsBody!.collisionBitMask = PhysicsCategories.None bullet.physicsBody!.contactTestBitMask = PhysicsCategories.Enemy self.addChild(bullet) let moveBullet = SKAction.moveTo(y: self.size.height + bullet.size.height, duration: 1) let deleteBullet = SKAction.removeFromParent() let bulletSequence = SKAction.sequence([bulletSound, moveBullet, deleteBullet]) let bulletRepeat = SKAction.repeatForever(bulletSequence) bullet.run(bulletRepeat) } ```<issue_comment>username_1: It's because of JavaScript's asynchronous functions: your `obtainAccessToken` is called before the response comes back from the server. You should use [promises](https://codecraft.tv/courses/angular/es6-typescript/promises/) for your services. Upvotes: 0 <issue_comment>username_2: @Arul, in general, the service should return only an observable (we don't subscribe in the service, we subscribe in the component) ``` //your service public obtainAccessToken(user):ServerResponse{ //just return an "observable" return this.http.post("/login",user,httpOptions) } //Your component //Your component is the one that consumes the observable login() { this.authenticationService.obtainAccessToken(this.user).subscribe(res=>{ this.serverResponse=res; console.log(this.serverResponse) //<-- here it has a value }); console.log(this.serverResponse); //Outside the subscribe it has NO value } ``` Upvotes: 2
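The two replies stored with this record address a different (Angular/HTTP) question, so here is a hedged sketch of one common SpriteKit pattern for the continuous-fire question itself: repeat a spawn action on the scene while a touch is held, and let each bullet run its own move-and-remove sequence once. `fireBullet()` is the poster's method; the scene subclass name, action key, and 0.2 s fire rate are assumptions, not taken from the thread.

```swift
import SpriteKit
import UIKit

class GameScene: SKScene {
    private let fireKey = "continuousFire"   // hypothetical action key

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Repeat on the scene, not on each bullet: the scene keeps spawning
        // new bullets for as long as the touch lasts.
        let fireOnce = SKAction.sequence([
            SKAction.run { [weak self] in self?.fireBullet() },
            SKAction.wait(forDuration: 0.2)          // assumed fire rate
        ])
        run(SKAction.repeatForever(fireOnce), withKey: fireKey)
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        removeAction(forKey: fireKey)                // stop firing on touch up
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        removeAction(forKey: fireKey)
    }

    func fireBullet() {
        // The poster's existing bullet setup goes here; each bullet should run
        // its move-and-remove sequence once, without repeatForever.
    }
}
```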
date: 2018/03/18 | nb_tokens: 816 | text_size: 3,145
<issue_start>username_0: I'm using UPDATING(col\_name) inside a trigger to check whether a column's value was updated or not. The big problem is that this call doesn't compare the values of the :old and :new objects: UPDATING(col\_name) is true whenever col\_name appears in the **SET** part of the statement, even if the value is unchanged. I don't want to check :old.col1<>:new.col1 for each column separately. How can I detect a changed column value correctly? I want to do this in a generic way, like: ``` SELECT col_name bulk collect INTO included_columns FROM trigger_columns where tbl_name ='name'; l_idx := included_columns.first; while (l_idx is not null) loop IF UPDATING(included_columns(l_idx)) THEN //DO STH return; END IF; l_idx := included_columns.next(l_idx); end loop; ``` Thanks<issue_comment>username_1: You can define a global function similar to the following: ``` CREATE OR REPLACE FUNCTION NUMBER_HAS_CHANGED(pinVal_1 IN NUMBER, pinVal_2 IN NUMBER) RETURN CHAR IS BEGIN IF (pinVal_1 IS NULL AND pinVal_2 IS NOT NULL) OR (pinVal_1 IS NOT NULL AND pinVal_2 IS NULL) OR pinVal_1 <> pinVal_2 THEN RETURN 'Y'; ELSE RETURN 'N'; END IF; END NUMBER_HAS_CHANGED; ``` Now in your trigger you just write ``` IF NUMBER_HAS_CHANGED(:OLD.COL1, :NEW.COL1) = 'Y' THEN -- whatever END IF; ``` Note that this function is defined to return CHAR so it can also be called from SQL statements, if needed - for example, in a CASE expression. Remember that in Oracle, there is no BOOLEAN type in the database - only in PL/SQL. You'll probably want to create additional versions of this function to handle VARCHAR2 and DATE values, for a start, but since it's a matter of replacing the data types and changing the name of the function I'll let you have the fun of writing them. :-) Best of luck. Upvotes: 0 <issue_comment>username_2: In a comment you said: > > "I want to do this in a generic way and manage it safer. put columns which are important to trigger in a table and don't put many IF in my trigger. " > > > I suspected that was what you wanted. The only way you can make that work is to use dynamic SQL to assemble and execute a PL/SQL block. That is a complicated solution, for no material benefit. I'm afraid I laughed at your use of *"safer"* there. Triggers are already horrible: they make it harder to reason about what is happening in the database and can lead to unforeseen scalability issues. Don't make them worse by injecting dynamic SQL into the mix. Dynamic SQL is difficult because it turns compilation errors into runtime errors. What is your objection to hardcoding column names and IF statements in a trigger? It's safer because the trigger is compiled. It's easier to verify the trigger logic because the code is *right there*. If this is just about not wanting to type, then you can generate the trigger source from the data dictionary views (such as `all_tab_cols`) or even your own metadata tables if you must (i.e. `trigger_columns`). Upvotes: 3 [selected_answer]
date: 2018/03/18 | nb_tokens: 719 | text_size: 2,712
<issue_start>username_0: How would I order a list of items where some of the items contain double quotes? * Advance * Access * “Chain free” deal * Binding * Broker Doing this `FaqData = repo.FaqData.OrderBy(q => q.Description)` results in the following * “Chain free” deal * Advance * Access * Binding * Broker Tried this as well ``` FaqData = repo.FaqData.OrderBy(q => q.QuestionDescription.Replace("”", "")) ```
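The question above is C#/LINQ, but the underlying idea — order by a sort key with the typographic quote characters stripped — can be sketched in Swift, the single example language used in this section. The sample strings come from the question; the set of quote marks is an assumption, and this is not the thread's solution.

```swift
import Foundation

// Sketch of the idea only (the original question is C#/LINQ): build the sort
// key with quote characters removed so “Chain free” deal sorts under "C".
let descriptions = ["Advance", "Access", "\u{201C}Chain free\u{201D} deal", "Binding", "Broker"]

// Quote marks to ignore when ordering — an assumed set, extend as needed.
let quoteMarks = CharacterSet(charactersIn: "\u{201C}\u{201D}\"'")

let ordered = descriptions.sorted { lhs, rhs in
    let l = lhs.components(separatedBy: quoteMarks).joined()
    let r = rhs.components(separatedBy: quoteMarks).joined()
    return l.localizedCaseInsensitiveCompare(r) == .orderedAscending
}
// ordered: ["Access", "Advance", "Binding", "Broker", "“Chain free” deal"]
```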
date: 2018/03/18 | nb_tokens: 548 | text_size: 1,826
<issue_start>username_0: How do I properly count the number of replaced strings in JavaScript? For some reason the increment function below gives me "0" instead of "2". [JSfiddle](https://jsfiddle.net/oa6wgpbf/) ``` var input = "A B C A"; function Replace(str) { return str.replace(/A/g, "1"); Counter(); } var increment = 0; function Counter() { increment++; } alert(Replace(input)); // 1 B C 1 alert(increment); // 2 ```<issue_comment>username_1: ``` var input = "A B C A"; function Replace(str) { Counter(); return str.replace(/A/g, "1"); } var increment = 0; function Counter() { increment++; } alert(Replace(input)); // 1 B C 1 alert(increment); // 2 ``` Counter() should be called before the return statement. Upvotes: 0 <issue_comment>username_2: *Note: I see you have already accepted a solution. That's a bit of a pity, since the accepted answer has essentially the same content and I posted mine about 10 minutes earlier. Well, as always, that's your decision.* **Have a look at a working solution!** You can count the number of **replacements** using this simple code: ```js var number = 0; var string = "A B C A" document.body.innerHTML = string.replace(/A/g, function() { // called each time an "A" is found number++; // increases the count return "1" // the return value replaces "A" with "1" }); console.log(number) // result will be 2 (because "A" was found 2 times) ``` Upvotes: 2 <issue_comment>username_3: KISS: you can use a function as the replacement argument of the `replace` function, and increase the counter there. ```js var input = "A B C A"; var count = 0; console.log(input.replace(/A/g, function(x){count+=1;return "1"})) console.log(count) ``` Upvotes: 5 [selected_answer]
date: 2018/03/18 | nb_tokens: 638 | text_size: 2,239
<issue_start>username_0: why am i getting this error at the line " let postObject = [String:AnyObject] = [ " > > Cannot assign to value: function call returns immutable value > > > ``` @IBAction func HandleSendButton(_ sender: Any) { let postRef = Storage.storage().reference().child("messages").child("\(selectedUser).[“id”]") let postObject = [[String:AnyObject]]() = [ "From": [ "uid": User.uid, "username": User.username, "photoURL": User.photoURL.absoluteString ], "Message": textView.text, "timestamp": [".sv":"timestamp"] ] as [String:Any] postRef.setValue(postObject, withCompletionBlock: { error, ref in if error == nil { self.dismiss(animated: true, completion: nil) } else { // Handle the error } }) } ```
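A hedged sketch of the likely fix: the error comes from writing `let postObject = [[String:AnyObject]]() = [...]`, which tries to assign a literal to the immutable result of an initializer call. Declaring the dictionary with a plain type annotation compiles. `User` and `textView` are the question's own names and are assumed to exist in scope; this is not confirmed by the thread.

```swift
// Sketch only: declare the dictionary with a type annotation and build the
// literal directly, instead of assigning to the result of [[String:AnyObject]]().
let postObject: [String: Any] = [
    "From": [
        "uid": User.uid,
        "username": User.username,
        "photoURL": User.photoURL.absoluteString
    ],
    "Message": textView.text,
    "timestamp": [".sv": "timestamp"]   // server-timestamp placeholder from the question
]
```

With the literal typed as `[String: Any]` up front, the trailing `as [String:Any]` cast from the question is no longer needed.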
date: 2018/03/18 | nb_tokens: 747 | text_size: 3,013
<issue_start>username_0: I'm currently doing a proof of concept for an Android app with the new Firestore as backend/db. I need to fetch a bunch of documents by their id (they are all in the same collection). Right now, I'm looping through the id list and fetching them one by one, storing them in a list which in turn updates a RecyclerView in the app. This seems to be a lot of work and it does not perform very well. What is the correct way to fetch a list of documents from Firestore without having to loop over all the ids and get them one by one? Right now my code looks like this: ``` for (id in ids) { FirebaseFirestore.getInstance().collection("test_collection").whereEqualTo(FieldPath.documentId(), id) .get() .addOnCompleteListener { if (it.isSuccessful) { val res = it.result.map { it.toObject(Test::class.java) }.firstOrNull() if (res != null) { testList.add(res) notifyDataSetChanged() } } else { Log.w(TAG, "Error getting documents.", it.exception) } } } ```<issue_comment>username_1: Firestore does not currently support query by IDs. According to [AngularFirebase](https://angularfirebase.com/lessons/advanced-firestore-nosql-data-structure-examples/ "AngularFireBase"), this is on the roadmap for future development, but it's not official: > > Keep in mind, Firestore is still in beta. Firebase engineers hinted at some really cool features on the roadmap (geo queries, query by array of ids) - I’ll be sure to keep you posted :) > > > Upvotes: 4 [selected_answer]<issue_comment>username_2: This is the way I'm using until they add this option. I made it with AngularFire2, but it is the same with the rxfire library if you are using React or Vue. You can map out the null items before subscribing if some documents have been deleted. ``` const col = this.fire.db.collection('TestCol'); const ids = ['a1', 'a2', 'a3', 'a4']; const queries = ids.map(el => col.doc(el).valueChanges()); const combo = combineLatest(...queries) .subscribe(console.log) ``` Upvotes: 3 <issue_comment>username_3: Here is how you can get specific documents; here is some sample code: ``` List documentsIds = {your document ids}; FirebaseFirestore.getInstance().collection("collection\_name") .whereIn(FieldPath.documentId(), documentsIds).get().addOnCompleteListener(new OnCompleteListener() { @Override public void onComplete(@NonNull Task task) { if (task.isSuccessful()) { for (DocumentSnapshot document : Objects.requireNonNull(task.getResult())) { YourClass object = document.toObject(YourClass.class); // add to your custom list } } } }).addOnFailureListener(new OnFailureListener() { @Override public void onFailure(@NonNull Exception e) { e.printStackTrace(); } }); ``` Upvotes: 0
date: 2018/03/18 | nb_tokens: 1,728 | text_size: 3,540
<issue_start>username_0: How would one apply this command I'm using in vim to sed or awk? `:%s/\v\n(\D)/ \1/g` Explanation ----------- * `:%`: Complete buffer * `s/`: Substitute * `\v`: Use regex magic...I frankly still don't understand this * `\n`: Match new line * `(\D)`: Match 'Not a digit'. Surrounded by braces to mark it as group * `/ \1/g`: Replace matches with space and group 1 * `/g`: Confirm replace for all occurences **INPUT** ``` Datum Transaktion Branche/Partner Verrechnet Belastung Gutschrift Bonuspunkte 24.12.2017 "Zinsen* Zinsperiode: vom 24.11. bis 24.12. Zins auf EUR 23'001'011.43 vom 20.12.-20.12. EUR 121.31 Zins auf EUR 23'002'045.73 vom 21.12.-23.12. EUR 173.99 Zins auf EUR 23'006'067.38 vom 24.12.-24.12. EUR 191.33" Ja 239.42 0.0 23.12.2017 "Acme Ent. Lebensmittelgeschäft " Lebensmittelgeschäft Ja 121.65 121.7 20.12.2017 "Restaurant Lorem ipsum Restaurant " Restaurant Ja 15.00 15.0 ``` **OUTPUT** ``` Datum Transaktion Branche/Partner Verrechnet Belastung Gutschrift Bonuspunkte 24.12.2017 "Zinsen* Zinsperiode: vom 24.11. bis 24.12. Zins auf EUR 23'001'011.43 vom 20.12.-20.12. EUR 121.31 Zins auf EUR 23'002'045.73 vom 21.12.-23.12. EUR 173.99 Zins auf EUR 23'006'067.38 vom 24.12.-24.12. EUR 191.33" Ja 239.42 0.0 23.12.2017 "Acme Ent. Lebensmittelgeschäft " Lebensmittelgeschäft Ja 121.65 121.7 20.12.2017 "Restaurant Lorem ipsum Restaurant " Restaurant Ja 15.00 15.0 ```<issue_comment>username_1: **`Awk`** equivalent would look as follows: ``` awk '{ printf "%s%s", (NR==1? "" : (/^[0-9]/? ORS : OFS)), $0 }END{ print "" }' file ``` * `OFS` - output field separator (defaults to space char) * `ORS` - output record separator The output: ``` Datum Transaktion Branche/Partner Verrechnet Belastung Gutschrift Bonuspunkte 24.12.2017 "Zinsen* Zinsperiode: vom 24.11. bis 24.12. Zins auf EUR 23'001'011.43 vom 20.12.-20.12. EUR 121.31 Zins auf EUR 23'002'045.73 vom 21.12.-23.12. EUR 173.99 Zins auf EUR 23'006'067.38 vom 24.12.-24.12. EUR 191.33" Ja 239.42 0.0 23.12.2017 "Acme Ent. Lebensmittelgeschäft " Lebensmittelgeschäft Ja 121.65 121.7 20.12.2017 "Restaurant Lorem ipsum Restaurant " Restaurant Ja 15.00 15.0 ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: `sed`, `perl` solutions: **perl** ``` tr -d '\v\n\r' < input.txt | perl -pe 's/(\d{2}\.\d{2}\.\d{4})/\n\1/g' ``` **sed** ``` tr -d '\v\n\r' < input.txt | sed 's/\([0-9]\{2\}\.[0-9]\{2\}\.[0-9]\{4\}\)/\n\1/g' ``` **Output:** ``` Datum Transaktion Branche/Partner Verrechnet Belastung Gutschrift Bonuspunkte 24.12.2017 "Zinsen* Zinsperiode: vom 24.11. bis 24.12. Zins auf EUR 23'001'011.43 vom 20.12.-20.12. EUR 121.31 Zins auf EUR 23'002'045.73 vom 21.12.-23.12. EUR 173.99 Zins auf EUR 23'006'067.38 vom 24.12.-24.12. EUR 191.33" Ja 239.42 0.0 23.12.2017 "Acme Ent. Lebensmittelgeschäft " Lebensmittelgeschäft Ja 121.65 121.7 20.12.2017 "Restaurant Lorem ipsum Restaurant " Restaurant Ja 15.00 15.0 ``` **Explanations:** the `tr` command will remove all the `\v`, `\n`, and `\r` characters, then the `sed` command will add a new line before each date element to create your CSV structure. Upvotes: 0 <issue_comment>username_3: Or just use vim in batch mode :) about 8 times slower, but it's an option. ``` echo '%s/blah/BLAH/ | w'|vi -e file.cfg ``` Source: <https://www.brianstorti.com/vim-as-the-poor-mans-sed/> Upvotes: 2
date: 2018/03/18 | nb_tokens: 1,082 | text_size: 2,438
<issue_start>username_0: I am trying to make an apache superset chart with map box view. I have to set latitude and longitude columns. But these data are in a postgresql + postgis database. So, latitude and longitude are in the same column location. An sql query would be like this: `SELECT ST_X(location), ST_Y(location) FROM Address` How can I make superset get latitude with the function `ST_X()`?
date: 2018/03/18 | nb_tokens: 1,041 | text_size: 2,279
<issue_start>username_0: I have an UIImageView which is as big as the whole view. When I insert an image, I would like for the image view to shrink itself in order for it to be as big as the image I insert itself. I cannot find a way to do it.
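A hedged sketch of one way to do this with frame-based layout: `sizeToFit()` uses `UIImageView`'s `sizeThatFits(_:)`, which returns the image's natural size. If the view's size is driven by Auto Layout constraints, adjust those constraints instead; the function name and the existing `imageView` are assumptions for illustration.

```swift
import UIKit

// Sketch: size an existing image view to match the image it displays.
func display(_ image: UIImage, in imageView: UIImageView) {
    imageView.image = image
    // Resizes the view to its sizeThatFits(_:), which for an image view
    // is the image's natural size.
    imageView.sizeToFit()
    // Equivalent explicit form:
    // imageView.frame.size = image.size
}
```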
date: 2018/03/18 | nb_tokens: 687 | text_size: 2,310
<issue_start>username_0: I'm trying to export the keys from a dictionary to a CSV file, and I want each key to be written as many times as its value. I want my output to be `['battleaxe', 'dagger', 'dagger', 'dagger', 'gold coin']` but instead I'm getting `['battleaxe', 'daggerdaggerdagger', 'gold coin']` My code: ``` def export_inventory(inventory, filename="export_inventory.csv"): with open(filename, "w") as csvfile: new = csv.writer(csvfile, quoting=csv.QUOTE_MINIMAL) new.writerow((k*v for (k,v) in inventory.items())) ``` Test function: ``` def test_export_inventory(self): export_inventory({'dagger': 3, 'gold coin': 1, "battleaxe": 1}, "test_inventory_export.csv") with open("test_inventory_export.csv", newline='') as csvfile: expected = ["dagger", "gold coin", "battleaxe", "dagger", "dagger"] expected.sort() reader = csv.reader(csvfile, delimiter=',', quotechar='|') for row in reader: row.sort() self.assertListEqual(expected, row) ```<issue_comment>username_1: This might help. As we don't know the contents of the CSV, I am assuming a structure similar to the dict below based on your desired output. ``` weapons = {'battleaxe': 1, 'dagger': 3, 'gold coin': 1} result = [] for item, times in weapons.items(): for t in range(times): result.append(item) print (result) # ['battleaxe', 'dagger', 'dagger', 'dagger', 'gold coin'] ``` Note that in your code you are doing k\*v, which repeats the item string as many times as its count. In my suggested code, we add it to a list instead. Upvotes: 0 <issue_comment>username_2: ``` new.writerow((k for k,v in inventory.items() for _ in range(v))) ``` k\*v is just string multiplication; 'a'\*3 # aaa Upvotes: 1 <issue_comment>username_3: The reason your code isn't working is that k\*v replicates the string v times, e.g. 'o'\*3 gives 'ooo'. ``` def export_inventory(inventory, filename="export_inventory.csv"): l = [] with open(filename, "w") as csvfile: new = csv.writer(csvfile, quoting=csv.QUOTE_MINIMAL) for key,value in inventory.items(): l+=[key for _ in range(value)] new.writerow(l) ``` Upvotes: 2 [selected_answer]
date: 2018/03/18 | nb_tokens: 646 | text_size: 2,314
<issue_start>username_0: I'm developing an MVC application, and in Razor I imported these scripts: ``` ``` I set the datetimepicker like this: ``` $('#datetimepicker').datetimepicker({ locale: 'cs', format: 'DD.MM.YYYY', dayViewHeaderFormat: 'MMMM YYYY', minDate: today, stepping: 1, showTodayButton: true, allowInputToggle: true }); $('.timepicker').datetimepicker({ locale: 'cs', format: 'HH:mm', dayViewHeaderFormat: 'MMMM YYYY', stepping: 5, allowInputToggle: true }); ``` and neither my datetimepicker nor my timepicker is showing the arrow buttons, as you can see in the picture: [![enter image description here](https://i.stack.imgur.com/Wr50O.png)](https://i.stack.imgur.com/Wr50O.png) Both have those controls in place, but without icons, and I cannot find out why.<issue_comment>username_1: So the problem is in the MVC Bootstrap import through NuGet packages. There is a problem with loading the Bootstrap Glyphicons file, so you need to import it from bootstrapcdn like this: ``` ``` or just download it and load it from your project. Upvotes: 2 <issue_comment>username_2: In Bootstrap 4 I replaced those icons with font icons to make it work: ``` $(".timepicker").datetimepicker({ icons: { up: 'fa fa-angle-up', down: 'fa fa-angle-down' }, format: 'LT' }); ``` Upvotes: 2 <issue_comment>username_3: Bootstrap 4 doesn't ship Glyphicons. You need to give the picker a separate icon library. ``` $('.datepicker').datetimepicker({ format: 'DD/MM/YYYY HH:mm', useCurrent: false, showTodayButton: true, showClear: true, toolbarPlacement: 'bottom', sideBySide: true, icons: { time: "fa fa-clock-o", date: "fa fa-calendar", up: "fa fa-arrow-up", down: "fa fa-arrow-down", previous: "fa fa-chevron-left", next: "fa fa-chevron-right", today: "fa fa-clock-o", clear: "fa fa-trash-o" } }); ``` Upvotes: 4 <issue_comment>username_4: In Bootstrap 4 with Font Awesome icons, just add an option to the initialization like the following. ``` $('#start_date').datetimepicker({ fontAwesome: true }); ``` or, with Simple Line Icons: ``` $('#start_date').datetimepicker({ bootcssVer: 4 }); ``` Upvotes: -1
date: 2018/03/18 | nb_tokens: 8,508 | text_size: 32,661
<issue_start>username_0: I install the VS2017 on Windows 7. After some time I receive the error: ``` MSI: C:\ProgramData\Microsoft\VisualStudio\Packages\Microsoft.VisualStudio.MinShell.Msi,version=15.6.27421.1\Microsoft.VisualStudio.MinShell.Msi.msi, Properties: REBOOT=ReallySuppress ARPSYSTEMCOMPONENT=1 MSIFASTINSTALL="7" VSEXTUI="1" VS7.3643236F_FC70_11D3_A536_0090278A1BB8="G:\Program Files (x86)\Microsoft Visual Studio\2017\Community" Return code: 1632 Return code details: The Temp folder is on a drive that is full or is inaccessible. Free up space on the drive or verify that you have write permission on the Temp folder. Log G:\TEMP\dd_setup_20180318121545_006_Microsoft.VisualStudio.MinShell.Msi.log ``` I have checked the G: where the TEMP located. It has 200 GB free. BUT one strange thing: this folder and all other folders are Read-Only. I uncheck it in the Properties, then close Properties dialog, open it again: it is Read-Only. I can modify it, even MSI installer could: it created the log file there. But in the middle of installation the error occurs. What is it and how I can solve this problem? I run with log: ``` Machine policy value 'DisableUserInstalls' is 0 SRSetRestorePoint skipped for this transaction. Note: 1: 1336 2: 3 3: C:\Windows\Installer\ MainEngineThread is returning 1632 No System Restore sequence number for this installation. User policy value 'DisableRollback' is 0 Machine policy value 'DisableRollback' is 0 Incrementing counter to disable shutdown. Counter after increment: 0 Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Rollback\Scripts 3: 2 Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\InProgress 3: 2 Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1 Restoring environment variables Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1 MainEngineThread is returning 1632 ```<issue_comment>username_1: > > ***Disc Space Reclaiming - Quick Wins?***: Too much to read? [**The essential options**](https://serverfault.com/a/642178/20599) (arguably). > > > --- Final Summary ------------- This issue turned out to be a redirected `TEMP` and `C:\Windows\Installer` cache folder - with the latter being on an unavailable drive. **Please be careful redirecting system folders**, in particular `C:\Windows\Installer`. It is a super-hidden system folder and side-effects are very common. You must make sure that the relocated folder has the correct ACL permissions that the original folder had. **This is crucially important for security reasons**. For one thing the whole folder could be deleted by someone who do not understand what it is for - making all packages un-uninstallable and un-maintainable. There are also other security reasons. Also: putting this folder on the network is **not** technically sound in my opinion - problems **will** result. A local drive is also problematic if drive letters change. Which brings me to the next point: Lacking Space for your System SSD Drive? ---------------------------------------- If your real issue is lacking disk space on your system SSD drive, please consider some alternatives listed below. Proceed with care and at your own risk with every option. Most of them should be harmless. ***Disc Space Visualizing***: I have an ancient tool called `SpaceMonger.exe` which shows me a visual representation of whatever is taking up my disc space. 
Very useful. It seems this tool is no longer supported. Maybe check <https://en.wikipedia.org/wiki/WinDirStat> for a similar tool (untested by me - run it by [virustotal.com](http://www.virustotal.com)). ***DriverStore***: And a word to the resident hacker in all computer guys: no, no - don't try to redirect `%SystemRoot%\System32\DriverStore` (!). "*Seductive The Dark Side Is*". "*Run Forrest, Run!*". "*Careful With That Axe Eugene*". Etc... You get the picture. Leaving out Monty Python allusions for now. Seriously: I do not know what [**low-level stuff**](https://technet.microsoft.com/en-us/library/2007.11.windowsconfidential.aspx) could be involved in the boot process. One would have to ask [<NAME>](https://blogs.msdn.microsoft.com/oldnewthing/), but don't. He has important things to do. However: [pnputil.exe, DriverStore Explorer - your own risk](https://superuser.com/a/597395/11906). Don't do it :-). Overall Suggestions ------------------- > > **UPDATE**: For laptops I like to use a [**high capacity, low-profile USB flash drive**](http://a.co/3gIQfd7) and / or [**a high capacity SD-card**](http://a.co/1iJbAEi) permanently sitting in a port to > hold my **downloads and installers**, **VS Help files**, maybe even **source code** (riskier). An obvious, but somewhat "clunky" option. > > > [![.](https://i.stack.imgur.com/faklr.png)](https://i.stack.imgur.com/faklr.png) > > One can combine this drive with [the Library feature in Windows Explorer](https://superuser.com/a/1018448/11906) > to show the flash drive under whatever library you want (Downloads, Videos, Pictures, Source, etc...). > > > My preferred **desktop** disc cleanup options below would be: **7**, **19**, **2**, **18**, 1, 6, 11, 12 (in that order). Preferred options for **laptops**: **7**, **19**, **2**, **18**, 6, 10 (reduce max cache sizes), 15, 17, 3 (in that order). **The real-world approach for me** is a slightly different order: **2** (purge obsolete Windows Updates - this may also trim WinSxS - but I am not positive), **19** (uninstall unneccessary software - can be relatively quick), then I run `SpaceMonger.exe` to find space hogs and move them - this often involves zapping the `Downloads folder` (**7**) and *purging*, *moving* or *clouding* media files (Pictures, Videos, Music), then **6** for developer PCs (jogging Visual Studio and uninstall useless SDKs and help files), and **9** (eliminate hibernation - not great for laptops), **18** (enable compression - can take forever), and finally I might zap the recovery partitions (laptops) and create a new partition in its place to allow data files to be stored there (freeing up system partition space). This zapping is a high-risk operation - obviously. Very error-prone (especially if inexperienced users use the diskpart command-line tool or a Linux Live Boot tool - described below). And obviously verify that you have **installation media** AND a **valid license key** before wiping out recovery partitions - it has to be mentioned. Data files I move are usually: source code repository, downloads folder, outlook PST file, images and videos, etc... **This procedure should reclaim many gigabytes of disc space**. Don't do it for fun though - though risk should be acceptable for most of these options (barring the recovery partition zapping - it is relatively simple to do, but error prone). Cleanup Options --------------- Apply healthy skepticism to these options. They are not all terribly useful in many cases - just attempting to mention all kinds of tweaks. 
**Potential easy, big wins** without much configuration and fiddling could be 2, 6, 7, 9, 18. Options 2 and 18 are almost always **time consuming**, but very effective. Maybe hours for option 2 (especially on Windows 7 & 8 - do not abort when it is running) and even longer for option 18 on a large computer or a slow disk (but the operation can be cancelled). ***Option 0, Cloud Storage*** is an **implied overall option** in this day and age. **OneDrive Filer**, **GDisk**, **Dropbox**, etc... Download data files on demand. 1. ***My Documents***: It is generally much better to **move user data folders** to a network location or another, local drive (best) than to redirect system folders! Few system-entanglements. * I wouldn't move the desktop or other folders found here: `HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders`, I would move "**My Documents**". Just right-click it in Windows Explorer, go to properties and there is a tab there with features to help you move it. **Careful whilst doing this - a backup is in order first**. * `Pictures` and `Video` **might** also be OK to move, but not the desktop or the other special folders - they may be involved in the boot or logon process (erroneous packages could cause that even for My Documents - nothing is without risk). * Streaming and **media files** from apps such as iTunes or similar can obviously totally hog a disc with limited capacity. I use `SpaceMonger.exe` to get an overview and then move the files somewhere else. * For computers with multiple users there will obviously be multiple "My Documents" folders to redirect. 2. ***Microsoft's Disk Cleanup Tool***: Run `cleanmgr.exe`, select `Clean up system files` as described here: <https://serverfault.com/q/573208/20599> (top). * **UPDATE Oct.2018**: "**Downloads**" folder is now a cleanup option! **DO NOT ENABLE!** It deletes the whole downloads folder without question. **This issue appears corrected by now Oct.2021**. * You can now zap the uninstalls for applied Windows Updates - **this can give you back several gigabytes on your system drive**. In the picture below I can zap 5.36 GB. For Windows 7 I have seen dozens of gigabytes being purgeable. * This tool might also slim down and shrink the `WinSxS directory` (the Win32 side-by-side assembly folder). I am not 100% positive. * Obviously you can remove unnecessary packages in Add / Remove Programs and remove system restore point (use the second tab in the image below to access these features): [![Microsoft's built-in Disk Cleanup Tool](https://i.stack.imgur.com/qBOjt.png)](https://i.stack.imgur.com/qBOjt.png) 3. ***Third Party Cleanup Applications***: Third party tools such as **CCleaner** may be able to clean out even more space by wiping out cache files and temporary files for all kinds of applications and tools. This particular tool [suffered a malware attack recently](https://www.pcworld.com/article/3225407/security/ccleaner-downloads-infected-malware.html). Use at your own risk. * Personal opinion / suggestion: use only for test boxes or non-critical machines. The cleanup is quite awesome, but it also involves some risks (lost login passwords, lost system logs, etc...). Self-evident, but it should probably be mentioned. * **My 2 cents**: not a corporate solution, but may be fine for advanced home users who like to experiment and to keep their machines tuned. 4. 
***Administrative Installations***: For large MSI files, performing an administrative install will prevent the caching of the whole MSI file in `C:\Windows\Installer`. You must install from a proper network share so files are available for repair operations. * An administrative installation essentially extracts embedded CAB files from the MSI and allows the creation of a network installation point where all computers can pull files from instead of caching all files locally. * The generic method for running and administrative installation is: `msiexec /a File.msi`. More details in links below. + [Extract MSI from EXE](https://stackoverflow.com/questions/1547809/extract-msi-from-exe/24987512#24987512) + [What is the purpose of administrative installation initiated using msiexec /a?](https://stackoverflow.com/questions/5564619/what-is-the-purpose-of-administrative-installation-initiated-using-msiexec-a/5751980#5751980) * [How can I eliminate the huge, cached MSI files in C:\Windows\Installer?](https://serverfault.com/q/642177/20599) * [There is a whole lot of installer caching going on](https://stackoverflow.com/a/48823086/129130) - it is a little out of hand if you ask me. 5. ***Mounted Drives***: Some guys dabble with **mounting external drives as folders on their system drive**. In other words another drive shows up as a regular folder on your system drive and functions as such ([sample](https://www.youtube.com/watch?v=xohlhpBTkcY)). * This I have no experience with, and I have doubts about its reliability over time. For all I know it might actually be better than several other options if you do it right (and never take out the physical drive). * I would do **data file folders only** (not settings folders, or core OS folders such as the desktop). Maybe for **source control folders**. If the link breaks, the data should still be safe and the system can still boot (and the link re-established). * **UPDATE**: [**Windows Explorer's "Include in library" is an alternative?**](https://superuser.com/a/1018448/11906) (**do have a peek**) I like to create a "**Source Code Library**" with included folders from here and there. 6. ***Visual Studio***: And the obvious cleanup options for Visual Studio (for completeness): * If you have downloaded **MSDN help locally** (`Help => Add and Remove Help Content`, remove items as appropriate and rely on online help instead or change the `Local store path` towards the bottom to use another drive for content). * Or you have **several versions of the SDKs you do not need** or you have **Visual Studio features you do not need**, get rid of them (In Visual Studio: `Tools => Get Tools and Features...` - get rid of unnecessary features - I often use the `Individual Components` view). 7. ***Downloads Folder***: I am sure I have forgotten many viable options to get some more workspace without wrecking your box. One would be to clean out your `Downloads folder` and move all installers to a network location - this might be the biggest save of all for some people. * This also works great for laptops - it is just about the first thing I would do for a laptop with little disc space. If you will not have access to your network share of installers - for example whilst traveling - then just use a thumb drive or external hard drive to hold your installers and ISO files. * For computers with multiple users there will obviously be multiple download folders potentially full of stuff. Use a disk space visualizer to see (see link on top of list). 8. 
***Page File***: Some people move the system page file (`pagefile.sys`) from the system drive to another drive. *Back in the day this caused me an **unbootable system**, but perhaps things are better now*. Not the first thing I would do though - this is very core OS-stuff. * Obviously impossible for a laptop with only one drive (unless you erase the recovery partition and create a real, visible partition in its place). * I find this option risky, maybe I should have put it in the "dis-honerable mentions" part below. * Be careful. Maybe the "last-known good"-feature or system restore can help you if you get problems? 9. ***Hibernation File***: the hibernation file on Windows systems will live on the system drive, and [I am not aware of any way to move it anywhere else](https://superuser.com/q/402768/11906) for [very fundamental technical reasons](https://technet.microsoft.com/en-us/library/2007.11.windowsconfidential.aspx). However, [you can disable hibernation to get rid of the whole file](https://support.microsoft.com/en-us/help/920730/how-to-disable-and-re-enable-hibernation-on-a-computer-that-is-running). This will free up a few gigabytes on a modern computer. * You obviously lose the ability to put your machine into hibernation (memory dumped to disk), but sleep mode (low-power use mode / standby) should still be available. * Hibernation mode may be more desirable to keep on for laptops (if battery runs out whilst traveling the laptop can not auto-hibernate and you could lose data). 10. ***Application Temp & Cache Folders***: The above mentioned `CCleaner` can wipe out a lot of temporary files for various applications (though I don't really recommend this for use - I use `cleanmgr.exe` instead - and CCleaner for test boxes). * **Web Browsers** (Firefox, Opera, Vivaldi, Chrome, IE, Edge, Safari, etc...) can also spam the disk with a lot of cache files and downloaded junk. It is possible to redirect all these folders, though I prefer to reduce them to a certain acceptable maximum size. * Plenty of other applications, of all kinds, leave trash on the system over time. Some of which can be cleaned with CCleaner mentioned above (or another such tool). Again not a tool recommendation. Use the cleanup features inside the application itself if available. * For computers with multiple users there will obviously be multiple cache folders folders to restrict and clean. 11. ***Special Data-Heavy Applications' Storage Folders***: Some applications can potentially store enormous data files on your system drive (and outside "My Documents") that can be moved to other drives. * The biggest suspect is probably **Outlook** (in older versions at least) - or **other email software** (Thunderbird, Lotus Notes, etc...). For Outlook there is a single \*.PST file storing all email and attachments, or a similar sync file if connected to Exchange. This file can be moved to different drive with relative ease. Some even resort to using the Web-interface only for their email and eliminate the local PST file (good for laptops). * Without going overboard, **MS-SQL databases** could be another type of massive data file that could be moved to a different drive with relative ease. * And this list could be made very big, but diminishing returns to add any more (**web server folders**, **virtual machine images**, media / video files (mentioned above), virtualized applications maybe, etc...). * For computers with multiple users there will obviously be multiple storage locations to redirect. 12. 
***Source Control Working Folder & Repository***: for a developer this is 100% self-evident - and almost embarrassing to list, but I just want to have it mentioned. It is also related to the previous point, but I add it as its own bullet point. You move both your working folder and your source code repository (if different, and if local) to a different drive than the system drive. For example **GIT**, Mercurial, Perforce, StarTeam, etc... 13. ***Build Process Junk***: Beyond moving source control folders to other drives, it is also possible that certain processes generate huge log files that spam the system in unexpected locations at times. I hear **MSBuild** tends to [enthusiastically create log files](https://twitter.com/barnson/status/974705853390520321) sprinkled across the system and I am not sure if normal Microsoft cleanup tools detect them (for example `cleanmgr.exe` mentioned above). And your source code could have lots of object files you can zap. 14. ***Visual Studio Code***: silly option, but for *ad-hoc developer laptops* or *traveling tech-workers*, one could potentially rely on the smaller and multi-platform **Visual Studio Code** instead of Visual Studio to do small development testing / work. Significantly smaller install. Personal note: a bit odd the whole tool :-). Also browser version now? * [Visual Studio Code](https://serverfault.com/q/642177/20599) (cross-platform). * [What are the differences between Visual Studio Code and Visual Studio?](https://stackoverflow.com/questions/30527522/what-are-the-differences-between-visual-studio-code-and-visual-studio) * <https://code.visualstudio.com/docs/supporting/faq> * Download: <https://www.visualstudio.com/> 15. ***Windows Store Apps & Per User Installations***: if there are multiple users on the box, several Store apps could be installed multiple times, once per user. Some cleanup could be done here if need be. I suppose some games could be quite big. And in the day and age of side-by-side installation features, we are now to deploy everything per-user? Odd. 16. ***Tweak Each Package Installation***: almost every package you install can be modified slightly during installation to add less files to the system partition. * **Redirect Application Installation Folder**: this is an option I personally dislike, but it is used a lot. For every installation you redirect the installation folder to a different drive and folder hierarchy than the regular `ProgramFilesFolder`. This is done on a per-package basis, and not all packages support this. Typically you go to a "Custom" installation dialog where you perform "feature selection" (what setup features to install). * **Leave Out Optional Features**: most packages you install will have optional components that you can leave out or even run-from-source in the case of some MSI packages. Certain developer tools can often be tweaked quite a bit without too many side-effects. **Large games** are often installed to a regular non-SSD hard drive which is not the system drive. 17. ***Uninstall Windows Components***: a few components can be added / removed from Windows. Click `Turn Windows Features On or Off` from the old-style `Add / Remove Control Panel Applet`. You can turn off / remove certain .NET versions, IE, IIS, Windows Media Player, Message Queue Server, Print to PDF, PowerShell and various other components. Maybe not that much to gain from this (some security benefits perhaps by removing some components - for example support for SMB 1.0 / CIFS file sharing or IIS). 18. 
***Enable Compression For System Drive***: you can enable compression on the whole system drive - with some performance penalties - provided the file system is NTFS. Simply `Right-click the system drive => Properties => Compress drive to save disc space`. This can take quite some time (old HD, SSDs are faster). You can also compress individual folders. I like to enable the "Show compressed or encrypted NTFS files in color" option in Windows Explorer. `File Menu => Options => Show => Show compressed or encrypted NTFS files in color`. 19. ***Uninstall Unnecessary Software***: the forgotten obvious option mentioned in item 2 above, you should obviously uninstall any software that is not needed anymore. **Common disk hogs**: `games`, `weird SDKs` and `development tools` installed for testing, `expired trial versions` for various software, etc... To uninstall: `Windows key` + `R`, type `appwiz.cpl` and hit `Enter`. 20. ***User Data Cleanup***: for certain uninstalled applications a lot of junk could be left in the `%UserProfile%` and in the `%AllUsersProfile%`. **Cleanup is as usual risky**, use caution, but there can be lots of junk here - sometimes gigabytes. * Great care must be taken during such cleanup. Zip up the folder first. "Big wins only" - why nitpick with tiny text files? Diminishing returns for real if you get bogged down in these folders. Use disc-space visualization tools to see the hogs. + `%AllUsersProfile%` - shared data + `%UserProfile%` and `%UserProfile%\AppData` - user specific data, remember to clean for all users (if multiple). 21. ***Stray Package Caches***: as mentioned above a lot of caching goes on for MSI packages (and other installer packages). It is likely that a lot of these packages can be left behind after uninstall (this was the case with Installshield cached setups back in the day at least). * The most commonly known caching locations are described here: [Cache locations for (MSI) packages](https://stackoverflow.com/a/48823086/129130). **Clean at your own risk, obviously** - I repeat it, and I mean it. Some gigabytes are commonly stored here. * Paths inline (just a selection, there can be many others): + **WiX**: `%ProgramData%\Package Cache` + **Installshield**: `%SystemRoot%\Downloaded Installations` (older IS setups) and `%LocalAppData%\Downloaded Installations` (newer IS setups) + **Advanced Installer**: `[AppDataFolder][|Manufacturer]\[|ProductName] [|ProductVersion]\install` + **Visual Studio**: `%AllUsersProfile%\Microsoft\VisualStudio\Packages`. **See important tip in comment below** (disable cache). 22. ***Package Distribution Cache Folders***: SCCM and other package distribution systems have cache folders that get really big. For example [ccmcache](https://superuser.com/questions/786288/what-is-in-c-ccmcache). These folders can usually be [cleaned or re-configured](https://www.youtube.com/watch?v=shORSxwo6tQ) to take less space. There are no doubt numerous other little tricks, but ***please don't redirect system folders!*** Alternative Approaches ====================== ***(Dis)-Honorable Mentions***: The below are **not recommendations**, but some alternative approaches. They are higher risk than the options above (which should be good enough), and best if you are setting up a new laptop fresh or reinstalling it, and want to get rid of pesky **vendor recovery-partitions** that you can do without. Let's state the obvious with conviction: ***A lot of data is lost every year using these tools***. So coffee or caffeine first. Glasses on. Look around. 
Adjust any pony tails and beards (ladies too). Speak to yourself in the third person. Assume a demonstrably insane posture and shout out "**I do!**" to really commit to the imminent disaster! Good luck! Fire in the hole! "Fire for effect". SNAFU. FUBAR. OK, enough already... I have had bad experiences - but no huge disasters (knock on wood) - with all these tools. Enough said - be careful, your data is important. Wife's baby pictures, your uncommitted code, etc... 1. ***diskmgmt.msc*** or **diskpart.exe** (Windows): open partition manager (`diskmgmt.msc`) and wipe out any recovery partitions or hidden partitions that you can live without and then expand your system disk to fill the whole physical disk or create a new visible partition. * Factory reset no longer possible (could be outdated anyway). You need installation media to reinstall (downloadable?). * Careful what you wipe out! **Unrecoverable**. Partitions are often protected and untouchable. They are also unmovable and un-expandable in many cases. * Maybe create a new, visible partition replacing the recovery partition and move data files and your downloads folder there to make more room on your system partition? * If the partitions are protected, you can use [diskpart](https://www.youtube.com/watch?v=4D9WkByh-zU) to delete them instead, or see next bullet point for `gparted`. Very easy to mess things up using `diskpart` though (command line). 2. ***gparted*** (Linux): you may be prevented from deleting a recovery partition from `diskmgmt.msc` (protected partitions). If you are adamant and insist, you can boot into a [Linux Live Disc / System](https://gparted.sourceforge.io/livecd.php) (booted from removable media) and delete using `gparted` for example. * I have done this to get rid of obsolete and useless recovery partitions and / or malware, and it worked just fine. But frankly I trust this `gparted` app as far as I can toss it. No offence to `gparted`, but playing well with Windows is challenging. Backup is crucial and mandatory for such risky endeavors - obviously. * Though risky (a Linux tool is updating the partition tables where your Windows partitions are declared) this may work for laptops where there is nowhere to redirect data folders since there is only one physical disk and you want the full disk for your system partition. * I think `gparted` even allows you to try to resize existing partitions at this point. I have never tried it. Good luck if you try. "Fire in the hole!". 3. ***Cloning***: some use **imaging tools** or **disk cloning** features (hardware) to clone the old disk onto a bigger one. **Backups essential obviously**. Far from my comfort zone - just mentioning it. Not really relevant for this list (which was supposed to be about simple and effective measures to gain more disc space). * I believe there are features for this in `gparted` as well. Never tested. * Various hardware solutions. I gave them up years ago. * Why I am skeptical? Malware. Disk errors. Encryption. NTFS complexity? AD-problems (old & new drive in use post-clone)? Etc... * Several hard drive vendors seem to deliver proprietary solutions for this - these may be better tested than generic approaches? 4. ***File System Allocation Size***: the file system used and its allocation size affects available space. 
Never bothered to look much at this, but a lot of space can be wasted by allocation size issues: [Would SSD drives benefit from a non-default allocation unit size?](https://serverfault.com/q/7531/20599) * Allocation size cannot be easily / safely changed for a disk in use. There may be tools that can do it, but the benefits are uncertain. * Modern Windows versions require **NTFS** as system partition file system. Other file systems such as **FAT32** or **exFAT** have lower overhead (especially for smaller partitions - there will be more space available), and they are potentially faster but have more limitations. For FAT32 the biggest limitation is probably the **4GB max file size** - not viable today. --- **The rest of this answer (below) was written during debugging** - I will leave it in. It contains generic and general-purpose debugging options. --- VC+ Runtimes ------------ As seen in the link towards the bottom, other people have seen the same deployment error. Before getting into too much debugging, **let's try the simplest approach possible.** Please try to **install the VC++ runtimes** for 2017 (and 2015 perhaps) from here: * [**The latest supported Visual C++ downloads**](https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads). Potential General Fixes ----------------------- [**This seems to be the better discussion online for this problem**](https://developercommunity.visualstudio.com/content/problem/24553/visual-studio-2017-fails-to-install.html). I would first try the suggestion to run this tool: [**Microsoft Install and Uninstall Troubleshooter**](https://support.microsoft.com/en-us/help/17588/fix-problems-that-block-programs-from-being-installed-or-removed). You can try [**this list of fixes**](https://answers.microsoft.com/en-us/windows/forum/windows_7-windows_programs/errors-2755-and-1336-when-installing-programs-or/5e045f29-4586-4428-91ba-f362a1bc610b) as well. Crucially I would also **try a reboot** before trying again to **release any potential locked files**. Just to wipe the slate clean. The system's event log might have further information on the error seen (sometimes even beyond what is in an `msiexec.exe` log). ACLs ---- What is the ACL (Access Control List) for your TEMP folder on that G: drive? **UPDATE**: Also make sure the hidden folder `C:\Windows\Installer` exists and have the correct permission settings. You need to `show protected operating system files` in Windows Explorer to see this folder. Verbose Logging --------------- **Try to create a proper, verbose log for the MSI install in question** (much more informative than the log you refer to). This gives you something to start with to figure out what is happening. [You can find some information on how to do logging here](https://stackoverflow.com/a/49028367/129130). I would **enable logging for all MSI installations** for debugging purposes. See [installsite.org on logging](http://www.installsite.org/pages/en/msifaq/a/1022.htm) (section "*Globally for all setups on a machine*") for how to do this. I prefer this default logging switched on for **dev and test boxes**. Typically you suddenly see an MSI error and you wish you had a log - now you can, always ready in `%tmp%`. 
Quick Testing ------------- In your case, I would go to `C:\ProgramData\Microsoft\VisualStudio\Packages\Microsoft.VisualStudio.MinShell.Msi,version=15.6.27421.1\` to see if the MSI package is present on disk, and then I would launch it with logging enabled: ``` msiexec.exe /I "Microsoft.VisualStudio.MinShell.Msi.msi" /QN /L*V "C:\msilog.log" ``` Alternatively I would just double-click the MSI file and see if I get a better, interactive error message. You will most likely need the verbose log to get any info. **See link in comment below** (concrete error). Upvotes: 3 [selected_answer]<issue_comment>username_2: Just check `C:\WINDOWS\temp` and `C:\WINDOWS\installer`. Do they exist and are they writable? In my case, I deleted `C:\WINDOWS\installer` previously and forgot about it, so I had to recreate it. Upvotes: 1 <issue_comment>username_3: The same error happens if [User Account Control](https://en.wikipedia.org/wiki/User_Account_Control) (UAC) is disabled. The Visual Studio installer can't write anything to TEMP if User Account Control is off. Solution - enable UAC. It was [Visual Studio 2019](https://en.wikipedia.org/wiki/Microsoft_Visual_Studio#2019) and [Windows Server 2012 R2](https://en.wikipedia.org/wiki/Windows_Server_2012_R2) in my case. Upvotes: 0
2018/03/18
943
3,505
<issue_start>username_0: I am using Room with RxJava2 to implement my data layer via Repository Pattern principles. I have the following simple code which decides where to pick data from. ``` @Override public Single<Team> getTeamById(int teamId) { return Single. concat(local.getTeamById(teamId), remote.getTeamById(teamId)). filter(team -> team != null). firstOrError(); } ``` The problem here is that instead of going to the remote source, it returns an error from the first source (local) if the data was not available. ``` android.arch.persistence.room.EmptyResultSetException: Query returned empty result set: select * from teams where id = ? ``` How should I instruct the concat to forgo any error that is received and continue its concatenation?<issue_comment>username_1: As long as you're not sure whether you can receive at least one Team from your data provider, you should probably think of using Maybe instead of Single. You can look up the definition here: [Single](http://reactivex.io/documentation/single.html) as it states: > > it always either emits one value or an error notification > > > Use Maybe instead: [Maybe](https://github.com/ReactiveX/RxJava/wiki/What's-different-in-2.0#maybe) > > there could be 0 or 1 item or an error signalled by some reactive > source > > > As your error already states, there seems to be a problem while extracting results from your query. Handle your result extraction correctly, so that you check if there are results before trying to extract any. Therefore the Maybe would either return 0 or 1 item, and not throw any error at all when no Team was found. Upvotes: 2 <issue_comment>username_2: You cannot pass null in RxJava2. So whenever your local repo is empty you just can't return null in your single. There was a question on Stack Overflow about handling null objects: [Handle null in RxJava2](https://stackoverflow.com/questions/49196095/handle-null-in-rxjava2/49196334#49196334) Also here you can find an article showing a preferred implementation of the repository pattern using RxJava2: <https://android.jlelse.eu/rxjava-2-single-concat-sample-for-repository-pattern-1873c456227a> So simplifying - instead of returning null from both local and remote repo pass some sort of "empty" object. That will be useful also in your business logic, allowing you to recognize an empty set of data. Upvotes: 2 <issue_comment>username_3: If you want to continue when the first source errors (instead of completing as empty), you can use `onErrorResumeNext` instead of `concat` (I assume both get calls return `Observable`, adjust as necessary): ```java return local.getTeamById(teamId) .onErrorResumeNext(error -> { if (error instanceof EmptyResultSetException) { return remote.getTeamById(teamId); } return Observable.error(error); }) .firstOrError(); ``` Upvotes: 2 [selected_answer]<issue_comment>username_4: I used `Maybe` to solve my `RxJava2` repository pattern problem. In your case, I would use the following code to sort it out: ``` //you may need to rewrite your local.getTeamById method protected Maybe<Team> getTeamById(int teamId) { Team team = localDataHelper.getTeamById(teamId); return team != null ? Maybe.just(team) : Maybe.empty(); } @Override public Single<Team> getTeamById(int teamId) { Maybe<Team> cacheObservable = local.getTeamById(teamId); Maybe<Team> apiCallObservable = remote.getTeamById(teamId).toMaybe(); return Maybe.concat(cacheObservable, apiCallObservable) .firstOrError(); } ``` Upvotes: 1
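A minimal sketch of the same fallback idea expressed directly on `Single`, matching the return type used in the question. The `local` and `remote` fields and `EmptyResultSetException` are the question's own; everything else is illustrative and assumes RxJava 2.x, where `Single.onErrorResumeNext` accepts a function returning another `SingleSource`:

```java
// Drops into the question's repository implementation.
@Override
public Single<Team> getTeamById(int teamId) {
    return local.getTeamById(teamId)
            .onErrorResumeNext(error -> {
                if (error instanceof EmptyResultSetException) {
                    // Room reported "no row" - fall back to the remote source.
                    return remote.getTeamById(teamId);
                }
                // Any other failure (I/O, mapping, ...) is propagated unchanged.
                return Single.error(error);
            });
}
```

Genuine errors still reach the subscriber, so only the "empty local cache" case triggers the remote call.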
2018/03/18
684
2,823
<issue_start>username_0: I'm writing a simple bare-metal application on an embedded processor. As part of this application it must allocate some memory on the heap of approx 256kB, using malloc. Note: Originally this was statically allocated in main but at a certain size limit we had to move to dynamic allocation (gcc silently failed). The issue is - when the program runs the first time everything works. However if the processor gets reset during execution the pc returns to the start but the program memory still has the heap as it was at the previous execution. So the next time the program runs malloc fails (as malloc's allocation table from the previous run still exists and it thinks the available heap space is already allocated). So the question is - how do I ensure malloc's table is cleared at the start of main every time I run? Is there a way I can use the location of the heap to zero out some memory (to blank the table) or alternatively get access to the list of pointers malloc has previously returned so I could free them?<issue_comment>username_1: The heap should be set up in the pre-main init code. In a bare metal system there is no system call to allocate heap memory to your program. You might try getting the debugger to start at the first loaded instruction instead of main and see where it sets up the heap. It sounds like your system requires the program be reloaded on a reset to initialize the heap. What system are you using? Upvotes: 2 <issue_comment>username_1: There is a topic here for exactly this issue: [Microblaze (no OS) - heap not clean after reset](https://support.xilinx.com/s/question/0D52E00006hpcuDSAQ/microblaze-no-os-heap-not-clean-after-reset) Unfortunately no answer but you are not the first to hit this. I think you have to step through from load in the debugger and see where the init code sets the heap pointer. Upvotes: 2 <issue_comment>username_2: What on earth are you doing? * At CPU reset you can't rely on *anything*, you need to initialize *everything* in the MCU, including stack and RAM setup. This is true for every microcontroller program ever written, bare metal ones in particular. If you for some reason are using a heap, you need to zero-out the memory before use. * Using malloc on a bare metal embedded system simply does not make sense. Where you had static .data/.bss variables before, you have a static heap segment instead. The heap solves no problem, it just hides the problem under the carpet. The main thing here is: either you have enough memory to cover the worst case, or you don't. Period. [See this](https://electronics.stackexchange.com/questions/171257/realloc-wasting-lots-of-space-in-my-mcu/171581#171581). Or if you are writing a program which is allowed to fail and bug out unexpectedly, you can ignore this. Upvotes: -1
2018/03/18
1,106
4,104
<issue_start>username_0: I am fairly new to java, so I don't have much experience with the syntax, I have tried some tutorials online and have watched a few videos on while and do while loops in Java from a user input. However, every edit i try breaks my code. The program below takes an answer from the user, an integer from 1 to 20, and has if statements, that carry out the different scenarios. However, I am trying to make it so that it will keep asking the user for an answer, until they input 0. Here is a part of relevant code: ``` System.out.print("Type in the pokedex number of the pokemon:"); int answer = Integer.parseInt(reader.nextLine()); if (answer == 1){ System.out.println( "\nPokemon: " + Bulbasaur.getname() + "\nType: " + Bulbasaur.getthing_about() + "\nHealth: " + Bulbasaur.gethp() + "\nAttack: " + Bulbasaur.getattack() + "\nDefense: " + Bulbasaur.getdefense() + "\nSpecial attack: " + Bulbasaur.getspattack() + "\nSpecial defense: " + Bulbasaur.getspdefense()+ "\nSpeed: " + Bulbasaur.getspeed() + "\nTotal: " + Bulbasaur.gettotal()); } ``` . . . There are 19 other if statements similar to this (I know this is inefficient code, but i will be making it efficient if it loops). How would I add a do while/while loop that loops these statements until 0 is entered?<issue_comment>username_1: You need to check `answer` in the loop condition. You can do the check and assignment to answer in one line ``` int answer; while ((answer = Integer.parseInt(reader.nextLine())) != 0) { // code here } ``` Upvotes: 2 <issue_comment>username_2: This is a quite intuitive implementation of what you want: ``` System.out.print("Type in the pokedex number of the pokemon:"); int answer = -1; // Initialize to a trivial value different from 0 while (answer != 0) { // It will not enter here if initialized to 0! answer = Integer.parseInt(reader.nextLine()); if (answer == 1){ // Code from the if statement } // End of if } // End of while ``` As @Coffeehouse said, you should take a look at what an array is, and try to use it appropriately. It will shorten your code by quite a lot. Step by step, though :) Upvotes: 0 <issue_comment>username_3: Your code would be more efficient if you kept the methods like `getName()` and all 'non-static', so that they could be called from objects of the classes. If you've understood how to use `int[]`, `double[]` etc. type of Arrays, what you can do is create an array of objects of the Pokemon like so: ``` Object[] pokemon = {new Bulbasaur(), new Ivysaur(), new Venusaur()}; // etc. etc. int answer = Integer.parseInt(reader.nextLine()); answer = answer - 1; // because arrays start at zero, not one System.out.println("\nPokemon: " + pokemon[answer].getname() + "\nType: " + pokemon[answer].getthing_about() + "\nHealth: " + pokemon[answer].gethp() + "\nAttack: " + pokemon[answer].getattack() + "\nDefense: " + pokemon[answer].getdefense() + "\nSpecial attack: " + pokemon[answer].getspattack() + "\nSpecial defense: " + pokemon[answer].getspdefense()+ "\nSpeed: " + pokemon[answer].getspeed() + "\nTotal: " + pokemon[answer].gettotal()); ``` Here's [a guide to using Objects](https://www.javatpoint.com/java-oops-concepts) if you need it. By making the methods non-static, you can call them from Objects which belong to an array, and all you have to do to add more Pokemon to the array is add `, new WhateverPokemon()` to it.. Also, if you want to print the choices to the user, you can do so like this: ``` for(int i = 0; i < pokemon.length; i++) { System.out.println(i+1+". 
"+ pokemon[i].getName()); } ``` If you want to add this code, then place it immediately after the `Object[] pokemon ...`. Upvotes: 2 [selected_answer]
2018/03/18
1,096
4,210
<issue_start>username_0: I have the ff class: ``` namespace App\Component\Notification\RealTimeNotification; use App\Component\Notification\NotificationInterface; class EmailNotification implements NotificationInterface { private $logNotification; private $mailer; private $engine; // This will appear on From field on Email. private $mailerFrom; public function __construct(LogNotification $logNotification, \Swift_Mailer $mailer, \Twig_Environment $twig, string $from) { $this->logNotification = $logNotification; $this->mailer = $mailer; $this->twig = $twig; $this->mailerFrom = $mailerFrom; } public function send(array $options): void { // Resolve options $this->resolveOptions($options); $sendTo = $options['sendTo']; $subject = $options['subject']; $template = $options['template']; $data = $options['data']; $body = $this->createTemplate($template, $data); $this->sendEmail($sendTo, $subject, $body); } protected function sendEmail($sendTo, $subject, $body): void { dump($this->mailerFrom); $message = (new \Swift_Message()) ->setSubject($subject) ->setFrom($this->mailerFrom) ->setTo($sendTo) ->setBody($body, 'text/html') ; $this->mailer->send($message); } protected function createTemplate($template, $data): string { return $this->twig->render($template, $data); } protected function resolveOptions(array $options): void { } protected function createLog(array $email): void { $message = 'Email has been sent to: ' . $email; $this->logNotification->send([ 'message' => $message, ]); } } ``` I tried to manually wire all the arguments with the following: ``` # Notification app.log_notification: class: App\Component\Notification\RealTimeNotification\LogNotification app.email_notification: class: App\Component\Notification\RealTimeNotification\EmailNotification decorates: app.log_notification decoration_inner_name: app.log_notification.inner arguments: $logNotification: '@app.log_notification.inner' $mailer: '@mailer' $twig: '@twig' $from: '%mailer_from%' ``` However, when I run the app it throws the exception: > > Cannot autowire service > "App\Component\Notification\RealTimeNotification\EmailNotification": > argument "$from" of method "\_\_construct()" must have a type-hint or be > given a value explicitly > > > Why is this event happening? Thanks!<issue_comment>username_1: Autowiring only works when your argument is an object. But if you have a scalar argument (e.g. a string), this cannot be autowired: Symfony will throw a clear exception. You should [Manually Wiring Arguments](https://symfony.com/doc/current/service_container.html#services-manually-wire-args) and explicitly configure the service, as example: ``` # config/services.yaml services: # ... # same as before App\: resource: '../src/*' exclude: '../src/{Entity,Migrations,Tests}' # explicitly configure the service App\Updates\SiteUpdateManager: arguments: $adminEmail: '<EMAIL>' ``` > > Thanks to this, the container will pass <EMAIL> to the > $adminEmail argument of \_\_construct when creating the > SiteUpdateManager service. The other arguments will still be > autowired. > > > Hope this help Upvotes: 0 <issue_comment>username_2: Answer by @username_1 is great! You can even drop service definition and delegate to [parameter binding](https://symfony.com/blog/new-in-symfony-3-4-local-service-binding) **since Smyfony 3.4+/2018+**: ``` # config/services.yaml services: _defaults: bind: $adminEmail: '<EMAIL>' # same as before App\: resource: '../src/*' ``` Do you want more example and logic behind it? 
Find it here: <https://www.tomasvotruba.cz/blog/2018/01/22/how-to-get-parameter-in-symfony-controller-the-clean-way/#change-the-config> Upvotes: 2
2018/03/18
820
2,737
<issue_start>username_0: I hope this question makes sense because I'm not too sure how to ask it. But my program - in python - asks the user to input their score from 1-10 on these ice cream flavours in an array. Displays their score and then prints their highest score as their favourite flavour. Which takes the number of index and prints that flavour from the array. However, let's say if the user put Mint Choc Chip and Strawberry both as 10. The program will only print the item that is in the array first, which is min choc chip despite strawberry also being the highest score. Does anyone know a way which can make the program display all the highest scored flavours? Please keep in mind that I am new to Python so if the answer seems obvious to you, it is not for me. So please be kind and any help or suggestions will be greatly appreciated! **I have tried** to add this: *if high > 1: print ("\nYour favourite flavour of ice cream is", flavours[high], flavours[high])* But just prints the same flavour that appears in the array twice, so it will print: Your favourite flavour of ice cream is mint choc chip mint choc chip. And I know this doesn't make sense because if the highest score was three flavours it will only print two (of the same). I have also tried to look for other functions such as import etc. But I couldn't find one that helps. This is not necessary for the program but will make it better and more realistic. Thank you! code: ``` import numpy as np flavours = ["Vanilla", "Chocolate", "Mint Choc Chip", "Rosewater", "Strawberry", "Mango", "Frutti Tutti", "Fudge Brownie", "Bubblegum", "Stracciatella"] Ice_Cream= [0]*10 print("Please input your score on these flavours of ice cream. With 1 being the lowest and 10 the highest.\n") for i in range(0,10): print(flavours[i]+":") Ice_Cream[i] = int(input()) print("\nResults:\n") for i in range(0,10): print(flavours[i]+":",Ice_Cream[i]) high = np.argmax(Ice_Cream) if high > 1: print ("\nYour favourite flavour of ice cream is", flavours[high], flavours[high]) else: print ("\nYour favourite flavour of ice cream is", flavours[high]) ```<issue_comment>username_1: Numpy.argmax returns an *array* of indices of maximum values <https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.argmax.html>. You should iterate through it ``` for i in np.nditer(high): print ("\nYour favourite flavour of ice cream is", flavours[i]) ``` Upvotes: 1 <issue_comment>username_2: ``` high=[i for i, x in enumerate(Ice_Cream) if x == max(Ice_Cream)] for i in high: print ("Your favourite flavour of ice cream is", flavours[i]) ``` variable high stores all the indices of max values of icecreams Upvotes: 1 [selected_answer]
2018/03/18
485
1,646
<issue_start>username_0: I have been using the python client API of ML engine to create training jobs of some canned estimators. What I'm not able to do is get the path of the saved\_model.pb on GCS because the path it is stored in has a timestamp as a dir name. Is there anyway I can get this using a regular expression or something on python client, so that I'll be able to deploy the model with correct path. The path seems to be in this format right now - > > **gs://bucket\_name/outputs/export/serv/`timestamp`/saved\_model.pb** > > > --- UPDATE Thanks [shahin](https://stackoverflow.com/users/9388302/shahin-vnia) for the answer. So I wrote this, which gives me the exact path that I can pass to the deploy\_uri for ml engine. ``` from google.cloud import storage def getGCSPath(prefix): storage_client = storage.Client() bucket = storage_client.get_bucket(bucket_name) mlist = bucket.list_blobs(prefix=prefix) for line in mlist: if 'saved_model.pb' in line.name: return line.name[:-14] # print getGCSPath('output/export/serv/') ```<issue_comment>username_1: Use gsutil and tail: ``` MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/outputs/export/serv | tail -1) gcloud ml-engine models create ${MODEL_NAME} --regions $REGION gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version $TFVERSION ``` Upvotes: 1 <issue_comment>username_2: ``` import os import cloudstorage as gcs bucket = os.environ.get('BUCKET') page_size = 1 stats = gcs.listbucket(bucket + '/outputs/export/serv', max_keys=page_size) ``` Upvotes: 1 [selected_answer]
2018/03/18
737
2,882
<issue_start>username_0: So here is the function that works fine: ``` void Insert(node ** root, int inpdata){//tree's root should be passed here if(*root == NULL){ *root = createNode(inpdata); } else if(inpdata < (*root)->data){ Insert(&(*root)->left,inpdata); } else{ Insert(&(*root)->right,inpdata); } } ``` But I could not figure out why we have to use a double pointer. Why wouldn't the following code work for instance: ``` void Insert(node * root, int inpdata){//tree's root should be passed here if(root == NULL){ root = createNode(inpdata); } else if(inpdata < root->data){ Insert(root->left,inpdata); } else{ Insert(root->right,inpdata); } } ``` Also, in the first function, I could not comprehend the usage of `&(*root)`. Wouldn't that make no sense, because `*root` itself is a pointer? So, the "address of a pointer" seems redundant since a pointer already stores an address value. I might be a bit confused so a little help would be much appreciated. Thanks!<issue_comment>username_1: C passes arguments by value. If you use the second approach: ``` void Insert(node * root, int inpdata); ``` After calling this function, the `root` in the caller side won't be affected. > I could not comprehend the usage of &(\*root) > You are confused by [precedence](http://en.cppreference.com/w/c/language/operator_precedence) and wrongly parse the expression. `&(*root)->left` is ``` &((*root)->left). ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: First of all, thank you for asking this question, as I was also implementing a binary tree (BT). The answer to your question lies in the rest of the program. If you declare the root/head pointer externally (globally), then you don't need to use a double pointer. But if you declare the root pointer inside the main function, then there are 2 possibilities (as you know, root is declared as NULL at the start of main): 1. When you call the insert function and pass root and data as parameters, they are received in the insert function as a local root pointer variable. Even if you change the links of that local root, it has no effect on the root in main; that's why you have to declare a double pointer in the insert function. With a double pointer you pass the address of the pointer at each recursive call, so whatever you do with the root inside insert affects the actual pointer back in main. 2. The other approach is to return the (possibly new) root from every call of the insert function and assign it back, computing everything as in a recursive linked-list insert, e.g. `root=insert(root,data);` Upvotes: -1
2018/03/18
1,275
3,918
<issue_start>username_0: If I send a batch of messages to a Topic, and read messages using a Subscription client, then I seem to receive messages *sequentially*, i.e. OnMessageAsync is fired for each message sent, however there is a noticeable (150+ millisecond) delay between each receive-event **Sender**: ``` var factory = MessagingFactory.CreateFromConnectionString("blah"); sender = factory.CreateMessageSender("MyTopicName"); var tasks = new List(); for (int i = 0; i < 10; i++) tasks.Add(sender.SendAsync(new BrokeredMessage("My Message")) .ContinueWith(t => Log("Sent Message {i}")); await Task.WhenAll(tasks); // This completes within a few millis ``` **Receiver**: ``` receiver = factory.CreateSubscriptionClient("MyTopicName", "MySubscription"); _sbClient.OnMessageAsync(async message => { var msg = message.GetBody(); Log("Received message xxxx await message.CompleteAsync(); }); ``` This means that the 10th message sent is only received more than 1.5 seconds after it was sent. An Azure latency test shows about a 200ms latency to the datacenter I'm using, so I'm not expecting messages to come back before that (and indeed the first message is received shortly after this), however I wouldn't expect the 'cumulative' behavior I'm seeing. Playing around with MaxConcurrentCalls and adding a delay in the OnMessageAsync, shows this working as expected, and I can see only MaxConcurrentCalls being processed at a time I've messed around with **DeleteOnReceive** modes, enabling '**Express**', disabling '**Partitioning**', using **AMQP** rather than SBMP etc., however nothing really seems to make much difference. [I'm Using Microsoft.ServiceBus, Version=3.0.0.0] **EDIT:** Here's what the log looks like. So if I send 10 messages at the same time, I'll only receive the 10th message 1.5 seconds after I sent it: > > 18:09:32.624 Sent message 0 > > 18:09:32.624 Sent message 1 > > 18:09:32.641 Sent message 2 > > 18:09:32.641 Sent message 3 > > 18:09:32.674 Sent message 4 > > 18:09:32.674 Sent message 5 > > 18:09:32.709 Sent message 6 > > 18:09:32.709 Sent message 7 > > 18:09:32.738 Sent message 8 > > 18:09:32.738 Sent message 9 > > > 18:09:32.791 Received message 1 in 341 millis > > 18:09:32.950 Received message 2 in 487 millis > > 18:09:33.108 Received message 3 in 628 millis > > 18:09:33.265 Received message 4 in 770 millis > > 18:09:33.426 Received message 5 in 914 millis > > 18:09:33.586 Received message 6 in 1060 millis > > 18:09:33.745 Received message 7 in 1202 millis > > 18:09:33.906 Received message 8 in 1347 millis > > 18:09:34.065 Received message 9 in 1492 millis > > ><issue_comment>username_1: Basically you're processing messages *much* faster than Service Bus can deliver new ones. Azure SB is relatively slow on an individual-message basis. Verify this by adding a `Task.Delay` before completion and log the thread IDs, and you should see multiple copies spin up. Upvotes: 0 <issue_comment>username_2: After a bit of digging into how exactly the **OnMessage** message pump worked I realised that this is actually a *polling* mechanism, where the underlying call to ServiceBus is still a 'Receive()' that attempts to **pull** any new message(s). If that times out, the call is done again ad infinitum. The behaviour I was seeing then made sense if that call to Receive() **only returned a single message**, and then required a150ms roundtrip to retrieve the next one etc. 
Enter the **[PrefetchCount](https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.servicebus.subscriptionclient.prefetchcount?view=azure-dotnet)**. Setting this to a nonzero value on the SubscriptionClient effectively permits the underlying Receive() to pull down multiple messages, which are then cached and made (immediately) available for bubbling into OnMessage. Upvotes: 3 [selected_answer]
2018/03/18
3,519
12,743
<issue_start>username_0: I'm new to unit tests, so I'm trying to code the unit test for my service. Right now I have this class ``` package com.praxis.topics.service; import com.praxis.topics.exception.EntityNotFoundException; import com.praxis.topics.model.Topic; import com.praxis.topics.model.enums.Status; import com.praxis.topics.repository.TopicRepository; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import java.time.LocalDateTime; import java.util.List; @Service("topics") public class TopicServiceImpl implements TopicService { private final TopicRepository topicRepository; @Autowired public TopicServiceImpl(TopicRepository topicRepository) { this.topicRepository = topicRepository; } @Override public List<Topic> getTopicsByStatus(int status) { // Check if status exists in enum if (status < 0 || Status.values().length - 1 < status) throw new ArrayIndexOutOfBoundsException(status); return topicRepository.findTopicByStatus(Status.values()[status]); } @Override public Topic addTopic(Topic topic) { //topic.setCreatedAt(LocalDateTime.now()); return topicRepository.save(topic); } @Override public Topic updateTopic(Topic topic) throws EntityNotFoundException { if (topicRepository.findTopicById(topic.getId()) == null) throw new EntityNotFoundException(String.format("Topic with id: %s was not found", topic.getId())); return topicRepository.save(topic); } @Override public Topic getTopicById(String id) { return topicRepository.findTopicById(id); } @Override public void deleteTopicById(String id) { topicRepository.deleteTopicById(id); } @Override public List<Topic> getTopicsByName(String name) { return topicRepository.findAllByNameContains(name); } @Override public Topic getTopicByName(String name) { return topicRepository.findTopicByName(name); } @Override public List<Topic> getAllTopics() { return topicRepository.findAll(); }
service.addTopic(topic)); } } ``` and this are the exceptions i get ``` java.lang.NullPointerException at com.praxis.topics.service.TopicServiceImplTest.addTopic(TopicServiceImplTest.java:29) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:436) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:115) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:170) at org.junit.jupiter.engine.execution.ThrowableCollector.execute(ThrowableCollector.java:40) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:166) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:113) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:58) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.lambda$executeRecursively$3(HierarchicalTestExecutor.java:112) at org.junit.platform.engine.support.hierarchical.SingleTestExecutor.executeSafely(SingleTestExecutor.java:66) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.executeRecursively(HierarchicalTestExecutor.java:108) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.execute(HierarchicalTestExecutor.java:79) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.lambda$executeRecursively$2(HierarchicalTestExecutor.java:120) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184) at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133) at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:430) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.lambda$executeRecursively$3(HierarchicalTestExecutor.java:120) at org.junit.platform.engine.support.hierarchical.SingleTestExecutor.executeSafely(SingleTestExecutor.java:66) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.executeRecursively(HierarchicalTestExecutor.java:108) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.execute(HierarchicalTestExecutor.java:79) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.lambda$executeRecursively$2(HierarchicalTestExecutor.java:120) at 
java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184) at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133) at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:430) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.lambda$executeRecursively$3(HierarchicalTestExecutor.java:120) at org.junit.platform.engine.support.hierarchical.SingleTestExecutor.executeSafely(SingleTestExecutor.java:66) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.executeRecursively(HierarchicalTestExecutor.java:108) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.execute(HierarchicalTestExecutor.java:79) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:55) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:43) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:170) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:154) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:90) at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:65) at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47) at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242) at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70) ``` any idea? i think that its because some of the variable dont get initialized, but im doing adding the @Autowired on those so i dont know what it is this is my pom file http://maven.apache.org/xsd/maven-4.0.0.xsd"> 4.0.0 ``` com.praxis topics 0.0.1-SNAPSHOT jar topics Project for Integrador 2018 org.springframework.boot spring-boot-starter-parent 1.5.10.RELEASE UTF-8 UTF-8 1.8 org.springframework.boot spring-boot-starter-data-mongodb org.springframework.boot spring-boot-starter-data-rest org.springframework.boot spring-boot-starter-test test org.springframework.restdocs spring-restdocs-mockmvc test com.fasterxml.jackson.datatype jackson-datatype-jsr310 ${jackson.version} org.junit.jupiter junit-jupiter-api RELEASE org.springframework.boot spring-boot-test org.springframework spring-test RELEASE junit junit org.assertj assertj-core org.springframework.boot spring-boot-maven-plugin ```<issue_comment>username_1: Your TopicServiceImplTest class is not spring component so @Autowired won't work. Spring Boot test should be annotated with @SpringBootTest and @RunWith(SpringRunner.class). Optionaly you could use other annotations to define your context to suit your specific needs. 
You can find basic information here: <https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-testing.html> Upvotes: -1 <issue_comment>username_2: For starters, if you're using JUnit Jupiter (the programming model in *JUnit 5*), you'll have to configure the Jupiter Test Engine in your build. I recommend you use the official [spring-boot-sample-junit-jupiter](https://github.com/spring-projects/spring-boot/tree/master/spring-boot-samples/spring-boot-sample-junit-jupiter) sample project as a template. The [pom.xml](https://github.com/spring-projects/spring-boot/blob/master/spring-boot-samples/spring-boot-sample-junit-jupiter/pom.xml) shows how to configure the Jupiter Test Engine with the Maven Surefire Plugin: ```xml org.springframework.boot spring-boot-maven-plugin org.apache.maven.plugins maven-surefire-plugin org.junit.platform junit-platform-surefire-provider ${junit-platform.version} ``` Note that it also shows that you need dependency declarations for `junit-jupiter-api` **and** `junit-jupiter-engine`. And [`SampleJunitJupiterApplicationTests`](https://github.com/spring-projects/spring-boot/blob/master/spring-boot-samples/spring-boot-sample-junit-jupiter/src/test/java/sample/SampleJunitJupiterApplicationTests.java) shows how to configure Spring Boot's testing support with the `SpringExtension` for JUnit Jupiter: ```java @ExtendWith(SpringExtension.class) @SpringBootTest class TopicServiceImplTest { @Autowired TopicService service; // .. } ``` Upvotes: 2 <issue_comment>username_3: We had recently started using **Spring 4.3, JUnit5, Maven** together as well and got NullPointerException in integration tests where `@Autowired` was used. Solution for us was to upgrade maven-related test dependencies: ``` org.apache.maven.plugins maven-failsafe-plugin 3.0.0-M4 // upgraded from 2.19.1 org.apache.maven.plugins maven-surefire-plugin 3.0.0-M4 // upgraded from 2.19 ``` Upvotes: 1
2018/03/18
785
2,410
<issue_start>username_0: How do I set a maximum so that a certain variable cannot exceed its limit in my code? The player can repeatedly go to heal, so their health becomes more than their maximum HP ``` public class HEAL { public static int maxhp = 25; public static int hp = 10; public static void main(String[] args) throws IOException { heal(); } public static void heal() throws IOException { BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); System.out.println("Would you like me to heal you? \n[1] Yes\n[2]No"); int choice = Integer.parseInt(br.readLine()); if (choice == 1) { System.out.println("You have been healed for 10 hp"); HEAL.hp = HEAL.hp + 10; System.out.println("Your current HP: "+HEAL.hp); } else if (choice == 2) { System.out.println("Leave"); } } } ```<issue_comment>username_1: What I would do is create a function setHp, which would check if it is within the constraints. ``` if (val < 0 || val > 100) { /* not within constraints, do what you want here */ } else { hp = val; } ``` Upvotes: 1 <issue_comment>username_2: Try this (added `if` to check if `hp` exceeds 25) : ``` HEAL.hp=HEAL.hp+10; if (HEAL.hp>25) //add this in your code HEAL.hp=25; System.out.println("Your current HP: "+HEAL.hp); ``` > Set value of variable `hp` as 25 if it exceeds 25 (since 25 is the > maximum limit). > > > Upvotes: 0 <issue_comment>username_3: Use `Math.min`: ``` HEAL.hp = Math.min(HEAL.hp + 10, 25); ``` Where `25` is the maximum health. Upvotes: 0 <issue_comment>username_4: First, make the variable hp private and 2nd, handle internally in the object if and when to increment it. Like this: [1] ``` private static int hp=10; ``` [2] ``` public void incrementHealPoints(int _hp){ if(this.hp < HEAL.maxhp){ this.hp += _hp; }else{ // just ignore? maybe raise an error or log this? } } ``` Upvotes: 1 <issue_comment>username_5: I added some explanations in the comments to the proper lines: ``` if((HEAL.hp + 10) >= HEAL.maxhp) { // If HEAL.hp after adding 10 would be equal or greater than HEAL.maxhp... HEAL.hp = HEAL.maxhp; // Assign HEAL.maxhp value (it's 25 in your example) to HEAL.hp } else { // Otherwise... HEAL.hp = HEAL.hp + 10; // Add 10 to HEAL.hp } ``` Upvotes: 0
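A small, runnable sketch of the clamping idea with all healing routed through one method, so the cap is enforced in a single place. The class and method names here are illustrative, not the question's original ones:

```java
public class Player {
    private static final int MAX_HP = 25;
    private int hp = 10;

    // Every heal goes through this method, so hp can never exceed MAX_HP.
    public void heal(int amount) {
        hp = Math.min(hp + amount, MAX_HP);
    }

    public int getHp() {
        return hp;
    }

    public static void main(String[] args) {
        Player p = new Player();
        p.heal(10);
        p.heal(10);
        System.out.println(p.getHp()); // prints 25, not 30
    }
}
```

Keeping the field private forces every caller to use the clamped method, which is what the "make hp private" answer above is getting at.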
2018/03/18
2,282
6,795
<issue_start>username_0: I would like my prompt to show a cross (✘) when the previous command fails. I use the following code: ``` export PROMPT=$'%(?..✘\n)\n› ' ``` This gives me the following output: ``` › echo Hello Hello › asjdfiasdf zsh: command not found: asjdfiasdf ✘ › ✘ ``` I would like to modify the prompt so that it does not repeat the cross when the prompt is redrawn after `Enter` (the third case in the example above). Is it possible?<issue_comment>username_1: I think I got it. Let me know if you find a bug... ``` preexec() { preexec_called=1 } precmd() { if [ "$?" != 0 ] && [ "$preexec_called" = 1 ] then echo ✘; unset preexec_called; fi } PROMPT=$'\n› ' ``` Result: ``` › ofaoisfsaoifoisafas zsh: command not found: ofaoisfsaoifoisafas ✘ › › echo $? # (not overwritten) 127 ``` Upvotes: 5 [selected_answer]<issue_comment>username_2: I do this in my zsh, though with colors rather than unicode characters. It's the same principle. First, I set up my colors, ensuring that they are only used when they are supported: ```bash case $TERM in ( rxvt* | vt100* | xterm* | linux | dtterm* | screen ) function PSC() { echo -n "%{\e[${*}m%}"; } # insert color-specifying chars ERR="%(0?,`PSC '0;32'`,`PSC '1;31'`)" # if last cmd!=err, hash=green, else red ;; ( * ) function PSC() { true; } # no color support? no problem! ERR= ;; esac ``` Next, I set up a magic enter function (thanks to [this post about an empty command](https://superuser.com/questions/625652/alias-empty-command-in-terminal/625663#625663) (ignore the question, see how I adapt it here): ```bash function magic-enter() { # from https://superuser.com/a/625663 if [[ -n $BUFFER ]] then unset Z_EMPTY_CMD # Enter was pressed on an empty line else Z_EMPTY_CMD=1 # The line was NOT empty when Enter was pressed fi zle accept-line # still perform the standard binding for Enter } zle -N magic-enter # define magic-enter as a widget bindkey "^M" magic-enter # Backup: use ^J ``` Now it's time to interpret capture the command and use its return code to set the prompt color: ```bash setopt prompt_subst # allow variable substitution function preexec() { # just after cmd has been read, right before execution Z_LAST_CMD="$1" # since $_ is unreliable in the prompt #Z_LAST_CMD="${1[(wr)^(*=*|sudo|-*)]}" # avoid sudo prefix & options Z_LAST_CMD_START="$(print -Pn '%D{%s.%.}')" Z_LAST_CMD_START="${Z_LAST_CMD_START%.}" # zsh <= 5.1.1 makes %. a literal dot Z_LAST_CMD_START="${Z_LAST_CMD_START%[%]}" # zsh <= 4.3.11 makes %. literal } function precmd() { # just before the prompt is rendered local Z_LAST_RETVAL=$? # $? only works on the first line here Z_PROMPT_EPOCH="$(print -Pn '%D{%s.%.}')" # nanoseconds, like date +%s.%N Z_PROMPT_EPOCH="${Z_PROMPT_EPOCH%.}" # zsh <= 5.1.1 makes %. a literal dot Z_PROMPT_EPOCH="${Z_PROMPT_EPOCH%[%]}" # zsh <= 4.3.11 makes %. a literal %. 
if [ -n "$Z_LAST_CMD_START" ]; then Z_LAST_CMD_ELAPSED="$(( $Z_PROMPT_EPOCH - $Z_LAST_CMD_START ))" Z_LAST_CMD_ELAPSED="$(printf %.3f "$Z_LAST_CMD_ELAPSED")s" else Z_LAST_CMD_ELAPSED="unknown time" fi # full line for error if we JUST got one (not after hitting ) if [ -z "$Z\_EMPTY\_CMD" ] && [ $Z\_LAST\_RETVAL != 0 ]; then N=$'\n' # set $N to a literal line break LERR="$N$(PSC '1;0')[$(PSC '1;31')%D{%Y/%m/%d %T}$(PSC '1;0')]" LERR="$LERR$(PSC '0;0') code $(PSC '1;31')$Z\_LAST\_RETVAL" LERR="$LERR$(PSC '0;0') returned by last command" LERR="$LERR (run in \$Z\_LAST\_CMD\_ELAPSED):$N" LERR="$LERR$(PSC '1;31')\$Z\_LAST\_CMD$(PSC '0;0')$N$N" print -PR "$LERR" fi } ``` Finally, set the prompt: ```bash PROMPT="$(PSC '0;33')[$(PSC '0;32')%n@%m$(PSC '0;33') %~$PR]$ERR%#$(PSC '0;0') " ``` Here's how it looks: [![screenshot](https://i.stack.imgur.com/8NjxK.png)](https://i.stack.imgur.com/8NjxK.png) A more direct answer to the question, adapted from the above: ```bash function magic-enter() { # from https://superuser.com/a/625663 if [[ -n $BUFFER ]] then unset Z_EMPTY_CMD # Enter was pressed on an empty line else Z_EMPTY_CMD=1 # The line was NOT empty when Enter was pressed fi zle accept-line # still perform the standard binding for Enter } zle -N magic-enter # define magic-enter as a widget bindkey "^M" magic-enter # Backup: use ^J function precmd() { # just before the prompt is rendered local Z_LAST_RETVAL=$? # $? only works on the first line here # full line for error if we JUST got one (not after hitting ) if [ -z "$Z\_EMPTY\_CMD" ] && [ $Z\_LAST\_RETVAL != 0 ]; then echo '✘' fi } PROMPT=$'\n› ' ``` With screen shot: [![simpler screenshot](https://i.stack.imgur.com/dMe7j.png)](https://i.stack.imgur.com/dMe7j.png) Upvotes: 3 <issue_comment>username_3: Use the prexec and precmd hooks: The preexec hook is called before any command executes. It isn't called when no command is executed. For example, if you press enter at an empty prompt, or a prompt that is only whitespace, it won't be called. A call to this hook signals that a command has been run. The precmd hook is called before the prompt will be displayed to collect the next command. Before printing the prompt you can print out the exit status. In here we can check if a command was just executed, and if there's a status code we want to display. This is very similar to the solution suggested by @username_1, which is also a great solution. It's worth using the hooks though so that if you've got anything else registering for these hooks they can do so too. ``` # print exit code once after last command output function track-exec-command() { zsh_exec_command=1 } function print-exit-code() { local -i code=$? (( code == 0 )) && return (( zsh_exec_command != 1 )) && return unset zsh_exec_command print -rC1 -- ''${(%):-"%F{160}✘ exit status $code%f"}'' } autoload -Uz add-zsh-hook add-zsh-hook preexec track-exec-command add-zsh-hook precmd print-exit-code ``` Upvotes: 1 <issue_comment>username_4: Thanks to everyone for their answers. Four years later, I would like to illustrate a variation on username_1's answer for those looking for the error code and an alert without a symbol. This is a minimalist prompt but when an error occurs it displays the error code and > in red following the top level directory. [![](https://i.stack.imgur.com/zxYwM.png)](https://i.stack.imgur.com/zxYwM.png) ``` preexec() { preexec_called=1 } precmd() { if [ "$?" != 0 ] && [ "$preexec_called" = 1 ]; then unset preexec_called PROMPT='%B%F{blue}%1~%f%b%F{red} $? 
> %F{black}' else PROMPT='%B%F{blue}%1~%f%b%F{blue} > %F{black}' fi } ``` Upvotes: 0
2018/03/18
792
3,242
<issue_start>username_0: I am using [firebaseUI for authentication](https://github.com/firebase/FirebaseUI-Android). It essentially opens an external activity, logs the user into Firebase, and sends the developer a callback in onActivityResult. It works great; the problem is I need to know if the user is a new signup or an existing user. Is there any kind of metadata or something I can use to know this? Here is what I have so far, in Java (Android): ``` private void ititFireBaseUi() { AuthUI.getInstance() .signOut(getActivity()) .addOnCompleteListener(new OnCompleteListener<Void>() { public void onComplete(@NonNull Task<Void> task) { // Choose authentication providers List<AuthUI.IdpConfig> providers = Arrays.asList( new AuthUI.IdpConfig.Builder(AuthUI.EMAIL_PROVIDER).build(), new AuthUI.IdpConfig.Builder(AuthUI.PHONE_VERIFICATION_PROVIDER).build(), new AuthUI.IdpConfig.Builder(AuthUI.GOOGLE_PROVIDER).build(), new AuthUI.IdpConfig.Builder(AuthUI.FACEBOOK_PROVIDER).build()); //new AuthUI.IdpConfig.Builder(AuthUI.TWITTER_PROVIDER).build()); // Create and launch sign-in intent startActivityForResult( AuthUI.getInstance() .createSignInIntentBuilder() .setAvailableProviders(providers) .setLogo(R.drawable.logo) .build(), RC_SIGN_IN); } }); } ``` and then for the result: ``` @Override public void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if (requestCode == RC_SIGN_IN) { IdpResponse response = IdpResponse.fromResultIntent(data); //I WOULD LIKE TO KNOW HERE IF THE USER IS A NEW USER OR EXISTING USER String msg = ""; if (resultCode == RESULT_OK) { // Successfully signed in FirebaseUser user = FirebaseAuth.getInstance().getCurrentUser(); msg = "generating token with email:" + user.getEmail(); Timber.d(msg); Toast.makeText(getActivity(), msg, Toast.LENGTH_LONG).show(); presenter.generateTokenWithFireBase(user); // ... } else { // Sign in failed, check response for error code } } } } ``` I see a [metadata class](https://firebase.google.com/docs/reference/android/com/google/firebase/auth/FirebaseUserMetadata) that maybe can help me, but I don't know how to use it. Gradle: implementation 'com.firebaseui:firebase-ui-auth:3.2.2'<issue_comment>username_1: ``` public boolean isNewSignUp(){ FirebaseUserMetadata metadata = mAuth.getCurrentUser().getMetadata(); return metadata.getCreationTimestamp() == metadata.getLastSignInTimestamp(); } ``` At the time of writing, it looks like each logged-in user has metadata as I suspected. We can check the last sign-in time to know if it's a new account. I heard they will be making this easier in the future; check later versions of Firebase Authentication before attempting this. Upvotes: 2 <issue_comment>username_2: Do not use the creation and sign-in timestamp comparison. I found it to be unreliable. `IdpResponse` has an `isNewUser()` method to tell you whether the login is a new account or not. Upvotes: 3
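A sketch of where the `isNewUser()` check would sit in the question's own callback. This assumes a FirebaseUI version whose `IdpResponse` actually exposes `isNewUser()` (it is not present in every release, so check the firebase-ui-auth release notes for the version in use); `RC_SIGN_IN` is the question's constant, the rest is illustrative:

```java
@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == RC_SIGN_IN && resultCode == RESULT_OK) {
        IdpResponse response = IdpResponse.fromResultIntent(data);
        // isNewUser() is assumed to exist on IdpResponse in the FirebaseUI version being used.
        boolean firstSignIn = response != null && response.isNewUser();
        if (firstSignIn) {
            // e.g. create the user's profile document exactly once
        }
    }
}
```

Compared with the timestamp comparison, this keeps the decision in the sign-in flow itself rather than relying on two timestamps being exactly equal.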
2018/03/18
1,316
4,759
<issue_start>username_0: The code below is a simple word count. The file generated by the programme looks like ``` key-value: hello 5 world 10 good 4 morning 10 nice 5 ``` But my goal is to count the number of words. The result should be 5; does it mean I need to count the number of keys? If so, how can I count the number of keys? Below is the code for the functionality: Mapper ``` import java.io.IOException; import java.util.StringTokenizer; import org.apache.hadoop.io.IntWritable; import org.apache.hadoop.io.Text; import org.apache.hadoop.mapreduce.Mapper; public class WordCountMapper extends Mapper<Object, Text, Text, IntWritable> { private final static IntWritable one = new IntWritable(1); private Text word = new Text(); public void map(Object key,Text value, Context context) throws IOException, InterruptedException { String remove_pinct = value.toString().replaceAll("[\\pP+~$`^=|<>~`$^+=|<>¥×]", " "); StringTokenizer itr = new StringTokenizer(value.toString()); while(itr.hasMoreTokens()) { word.set(itr.nextToken()); context.write (word,one); } } } ``` Reducer ``` import java.io.IOException; import org.apache.hadoop.io.IntWritable; import org.apache.hadoop.io.Text; import org.apache.hadoop.mapreduce.Reducer; public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> { private IntWritable result = new IntWritable(); public void reduce (Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException { int sum = 0; for (IntWritable val : values) { sum += val.get(); } result.set(sum); context.write(key, result); } } ``` Job Control ``` import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; import org.apache.hadoop.io.IntWritable; import org.apache.hadoop.io.Text; import org.apache.hadoop.mapreduce.Job; import org.apache.hadoop.mapreduce.lib.input.FileInputFormat; import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; public class WordCountJobControl { public static void main(String[] args) throws Exception { Configuration conf = new Configuration(); Job job = Job.getInstance(conf, " word count "); job.setJarByClass(WordCountJobControl.class); job.setMapperClass(WordCountMapper.class); job.setCombinerClass(WordCountReducer.class); job.setReducerClass(WordCountReducer.class); job.setOutputKeyClass(Text.class); job.setOutputValueClass(IntWritable.class); FileInputFormat.addInputPath(job, new Path(args [0])); FileOutputFormat.setOutputPath(job, new Path(args [1])); System.exit(job.waitForCompletion(true) ? 
0 : 1); } } ```<issue_comment>username_1: You could constrain the number of reducers to one: ``` job.setNumReduceTasks(1); ``` Then in your reducer count the number of invocations of the `reduce` method, and write this value in the `cleanup` method, something like this: ``` public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> { private int wordCount; @Override protected void setup(Context context) { wordCount = 0; } @Override protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException { ++wordCount; } @Override protected void cleanup(Context context) throws IOException, InterruptedException { context.write(new Text("WordCount"), new IntWritable(wordCount)); } } ``` You also need to **remove** the line setting the combiner class: ``` job.setCombinerClass(WordCountReducer.class); ``` Upvotes: 2 <issue_comment>username_2: Mapper ``` public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> { protected void map(LongWritable key, Text value, Context context) throws IOException,InterruptedException { String words[]=value.toString().split(","); for(String word:words) context.write(new Text(word),new IntWritable(1)); } } ``` Reducer ``` public class WordCountReducer extends Reducer<Text, IntWritable, IntWritable, IntWritable> { protected void reduce(Text word, Iterable<IntWritable> values, Context context) throws IOException,InterruptedException { int count=0,len; for(IntWritable val:values) count+=val.get(); context.write(new IntWritable(word.toString().length()),new IntWritable(count)); } } ``` Job Control ``` public class WordCountJobControl { public static void main(String args[]) throws Exception { Job job=new Job(); job.setJobName("Length"); job.setJarByClass(WordCountJobControl.class); job.setMapperClass(WordCountMapper.class); job.setReducerClass(WordCountReducer.class); job.setOutputKeyClass(Text.class); job.setOutputValueClass(IntWritable.class); FileInputFormat.addInputPath(job,new Path(args[0])); FileOutputFormat.setOutputPath(job,new Path(args[1])); System.exit(job.waitForCompletion(true)?0:1); } } ``` Upvotes: 2 [selected_answer]
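If forcing a single reducer is not desirable, one alternative sketch is to count distinct keys with a Hadoop counter. The group and counter names below (`WordStats` / `DistinctWords`) are made up for this example; the rest mirrors the question's reducer. Note this only stays accurate if the same class is not also used as the combiner, because combine calls would increment the counter too:

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result); // keep the normal word/frequency output

        // Each reduce() call sees exactly one distinct word, so bump the counter once per key.
        context.getCounter("WordStats", "DistinctWords").increment(1);
    }
}
```

After `job.waitForCompletion(true)` returns, the driver can read the total with `job.getCounters().findCounter("WordStats", "DistinctWords").getValue()`.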
2018/03/18
822
3,078
<issue_start>username_0: I have a javascript function like this which is used with Select2. ``` function formatResult(item) { var markup = '\ \ \ ![](http://mysite/profile/' + item.Username + '_thumb.jpg)\ \ ' + item.FirstName + ' ' + item.LastName + '\ ' + item.Title + '\ \ '; return markup; } ``` And it works, although building the HTML is tedious when making it as a string. What I would like to do is make this into a partial for easier maintainability and editing. Ideally I want something like this. ``` function formatResult_User(item) { var markup = '@Html.Raw(Html.Partial("_UserCardTemplate").ToString().Replace(Environment.NewLine, ""))'; return markup; } ``` But how can I insert the `item` variables back in? Would I just put a placeholder value and use `replace()` on the `markup` variable such as `markup = markup.replace('item.Title', item.Title)`? Is there a better way?
2018/03/18
1,460
5,146
<issue_start>username_0: I uninstalled a previous version of android and just have install version3.0.1 I created a new application in android studio scratch I have not modified any code but when I try to build the project I get the following errors ``` Error:resource style/Theme.AppCompat.Light.DarkActionBar (aka com.example.android.my:style/Theme.AppCompat.Light.DarkActionBar) not found. C:\AndroidStudioProjects\MyAPP\app\build\intermediates\incremental\mergeDebugResources\merged.dir\values\values.xml Error:(101) error: style attribute 'attr/colorPrimary (aka com.example.android.my:attr/colorPrimary)' not found. Error:(102) error: style attribute 'attr/colorPrimaryDark (aka com.example.android.com.example.android.my:attr/colorPrimaryDark)' not found. Error:(103) error: style attribute 'attr/colorAccent (aka com.example.android.my:attr/colorAccent)' not found. Error:(101) style attribute 'attr/colorPrimary (aka com.example.android.my:attr/colorPrimary)' not found. Error:(102) style attribute 'attr/colorPrimaryDark (aka com.example.android.my:attr/colorPrimaryDark)' not found. Error:(103) style attribute 'attr/colorAccent (aka com.example.android.my:attr/colorAccent)' not found. Error:failed linking references. Error:java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: com.android.tools.aapt2.Aapt2Exception: AAPT2 error: check logs for details Error:java.util.concurrent.ExecutionException: com.android.tools.aapt2.Aapt2Exception: AAPT2 error: check logs for details Error:com.android.tools.aapt2.Aapt2Exception: AAPT2 error: check logs for details Error:Execution failed for task ':app:processDebugResources'. > Failed to execute aapt ``` This is my `build.gradle` file ``` apply plugin: 'com.android.application' android { compileSdkVersion 26 defaultConfig { applicationId "com.example.android.my" minSdkVersion 21 targetSdkVersion 26 versionCode 1 versionName "1.0" testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner" } buildTypes { release { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android.txt'), 'p roguard-rules.pro' } } } dependencies { implementation fileTree(dir: 'libs', include: ['*.jar']) implementation 'com.android.support:appcompat-v7:26.1.0' implementation 'com.android.support.constraint:constraint-layout:1.0.2' testImplementation 'junit:junit:4.12' androidTestImplementation 'com.android.support.test:runner:1.0.1' androidTestImplementation 'com.android.support.test.espresso:espresso- core:3.0.1' } ``` And this is my `values.xml` file: ``` xml version="1.0" encoding="utf-8"? ..... some codes here .... #FF4081 #3F51B5 #303F9F .... some more attributes here .... MyAppName <!-- Customize your theme here. --> <item name="colorPrimary">@color/colorPrimary</item> <item name="colorPrimaryDark">@color/colorPrimaryDark</item> <item name="colorAccent">@color/colorAccent</item> ``` It seems that that error are some strange behaviors in android studio I have invalidated caches and restarted the project but the issue still exists, How should I solve this issue?<issue_comment>username_1: you have duplicated lines in your gradle file. 
The following two lines are the same: ``` compile 'com.android.support:appcompat-v7:26.1.0' implementation 'com.android.support:appcompat-v7:26.1.0' ``` --- **remove the following line:** ``` compile 'com.android.support:appcompat-v7:26.1.0' ``` Upvotes: 2 <issue_comment>username_2: If anyone reading this has the same problem, this happened to me recently, and it was due to having the XML header written twice by mistake: ``` xml version="1.0" encoding="utf-8"? xml version="1.0" encoding="utf-8"? ``` The error I was getting was completely unrelated to this file, so it was a tough one to find. Just make sure all your new XML files don't have some kind of mistake like this (as it doesn't show up as an error). Upvotes: 3 <issue_comment>username_3: I was receiving this error when doing the [android tutorial on support libraries](https://codelabs.developers.google.com/codelabs/android-training-support-libraries/index.html?index=..%2F..android-training#5) The problem is in your styles.xml ``` <!-- Customize your theme here. --> <item name="colorPrimary">@color/colorPrimary</item> <item name="colorPrimaryDark">@color/colorPrimaryDark</item> <item name="colorAccent">@color/colorAccent</item> ``` Just remove those four item lines above and it will work. This is probably because you are using a Compat library and the theme is not defined. Upvotes: 2 <issue_comment>username_4: You don't need to change anything in the dependencies. If you are combining multiple modules, they may conflict, so you must add the prefix "android:". In this situation, I think you just need to update your styles.xml file: ``` <!-- Customize your theme here. --> <item name="android:colorPrimary">@color/colorPrimary</item> <item name="android:colorPrimaryDark">@color/colorPrimaryDark</item> <item name="android:colorAccent">@color/colorAccent</item> ``` It worked with my source. Upvotes: 0
2018/03/18
790
3,026
<issue_start>username_0: The purpose is to change color of all characters in #text one by one, I made a loop: ``` function myFunction() { var letters = document.getElementById('text'); for (var i = 0; i < letters.innerHTML.length; i++) { //only change the one you want to letters.innerHTML = letters.innerHTML.replace(letters[i], ''+letters[i]+''); } } ``` It doesnt work but also doesnt show any errors. <https://jsfiddle.net/zkbctk2h/><issue_comment>username_1: This is because you are updating the letters, and reading the next letter afterwards. You should use innerText instead of innerHTML because then you only get the text. Example fiddle: <https://jsfiddle.net/zkbctk2h/25/> ``` function myFunction() { var letters = document.getElementById('text'), str = letters.innerText, newString = ""; for (var i = 0; i < str.length; i++) { //only change the one you want to newString += ''+str[i]+''; } letters.innerHTML = newString; } ``` I suggest to read once and write once to the dom. If you read and write a force redraw is triggered in the browser. Therefor it can get slow if you have large text. Upvotes: 1 <issue_comment>username_2: I suggest to store the text of the element with `id = "text"` and build a new string out of the old text, because replace would replace the first found character which may not the wanted character, because the replaced character cold contain a character which should not be replaced. ```js function myFunction() { var letters = document.getElementById('text'), text = letters.innerHTML letters.innerHTML = ''; for (var i = 0; i < text.length; i++) { letters.innerHTML += '' + text[i] + ''; } } myFunction(); ``` ```html abracadabra ``` Some typewriter functionality with [`setInterval`](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/setInterval) and [`clearInterval`](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/clearInterval) ```js function myFunction() { var letters = document.getElementById('text'), text = letters.innerHTML, i = 0; return function () { var j; if (i < text.length) { letters.innerHTML = ''; for (j = 0; j <= i; j++) { letters.innerHTML += '' + text[j] + ''; } letters.innerHTML += text.slice(j); i++; } else { clearInterval(interval); } } } var interval = setInterval(myFunction(), 500); ``` ```html abracadabra ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: Just suggesting a more functional approach: ```js const myFunction = id => document.getElementById(id).innerHTML .split('') .map(c => `${c}` ) .join('') document.getElementById('text').innerHTML = myFunction('text') ``` ```html Hello World ``` Upvotes: 0
2018/03/18
439
1,550
<issue_start>username_0: I am using SQL Server. This query: ``` SELECT 3/0 AS X ``` returns an error, as expected: > > Msg 8134, Level 16, State 1, Line 1 > > Divide by zero error encountered. > > > But `IF EXISTS` treats X as a row that exists. This query: ``` IF EXISTS (SELECT 3/0 AS X) PRINT 'SUCCESS' ELSE PRINT 'FAILURE' ``` returns 'SUCCESS'. I couldn't find anything in the IF EXISTS documentation that explains this behavior. Can anyone shed some light on this?<issue_comment>username_1: It does not exactly "ignore" errors specifically. It is ignoring everything in the `select` clause (with the exception of `top (0)`). `exists` is checking to see if any *rows* are returned by the query. As an optimization, it ignores the `select` part of the query, because this does not affect the existence of rows. Incidentally, you get the same effect with `case`: ``` select (case when 1=1 then 1 else 1/0 end) ``` also returns 1 rather than an error. Upvotes: 3 [selected_answer]<issue_comment>username_2: The expression in the `SELECT` statement does not need to be evaluated in order for SQL Server to process the EXISTS predicate. Look at the execution plan to see what the optimizer does with this query: [![IF EXISTS Plan](https://i.stack.imgur.com/dGati.png)](https://i.stack.imgur.com/dGati.png) Note the `VALUES` returned by the scalar operator are not those that were specified in the `SELECT` clause. No error occurs because the actual expression used in the query does not divide by zero. Upvotes: 2
2018/03/18
901
3,751
<issue_start>username_0: I'm incredibly new to C#, so please ignore my ignorance with this question. But I've searched online and can't find a solution that makes sense. Basically, I have a list being stored in a CommonClass.CS, as below: ``` using System; using System.Collections.Generic; namespace NewParkingApp { public class CommonClass { public List < string > itemData = new List < string > () { "Android", "IOS", "Windows Phone", "Xamarin-IOS", "Xamarin-Form", "Xamarin_Android" }; } } ``` From another class, I'd then like to be able to access that list. This is what I have so far for the accessing class: ``` using Foundation; using System; using System.Collections.Generic; using UIKit; namespace NewParkingApp { public partial class ResultViewController : UIViewController { public ResultViewController (IntPtr handle) : base (handle) { } public override void ViewDidLoad() { base.ViewDidLoad(); List itemData = CommonClass.itemData; mainListview.Source = new TableViewSource(itemData); } public override void DidReceiveMemoryWarning() { base.DidReceiveMemoryWarning(); // Release any cached data, images, etc that aren't in use. } } } ``` As an addition, it would also be useful to know how to add items to that list. But I'm guessing that should be pretty easy once I know how to access it? Thanks in advance!<issue_comment>username_1: Before continuing any further you have to know whether you want this list to be *shared* among many objects, or *unique* for each instance of `CommonClass` you create. As the comments mention: * For a unique list per each `CommonClass` instance change the `itemData` assignment to use an instanced member: ``` CommonClass commonClass = new CommonClass(); List itemData = commonClass.itemData; ``` * For a shared list add the `static` keyword to your class's `itemData` as such: `public static List itemData = new List()` Also note that the `List` class is only suitable for one thread operations only (i.e. only one object can read or write to the list at any given time), and once you start dealing with multi-threading you should opt for a thread-safe type. Afterwards you can just use `itemData.Add("whatever");`, but really this can be easily checked online. [Here's just one example at dotnetperls.com](https://www.dotnetperls.com/list) Upvotes: 2 <issue_comment>username_2: ``` //firt change your class commanclass with that code using System; using System.Collections.Generic; namespace NewParkingApp { public class CommonClass { public static List < string > itemData = new List < string > () { "Android", "IOS", "Windows Phone", "Xamarin-IOS", "Xamarin-Form", "Xamarin_Android" }; } } //after class your main class in past using Foundation; using System; using System.Collections.Generic; using UIKit; namespace NewParkingApp { public partial class ResultViewController : UIViewController { public ResultViewController (IntPtr handle) : base (handle) { } public override void ViewDidLoad() { base.ViewDidLoad(); List itemData = CommonClass.itemData; mainListview.Source = new TableViewSource(itemData); } enter code here public override void DidReceiveMemoryWarning() { base.DidReceiveMemoryWarning(); // Release any cached data, images, etc that aren't in use. } } } ``` Upvotes: 1
2018/03/18
1,195
5,227
<issue_start>username_0: I am trying to build a Reminder App. In this app, user will select a WiFi name which is saved on the phone. And when the user connects to that WiFi, Phone is going to vibrate. And after phone vibrates, service is going to die. I have tried to write a Service to check the WiFi name in every 1 minute. When I close the app, it runs again but not as I expected. Service Implementation: ``` public class AlarmService extends Service { private static final String TAG = "com.wificheckalways"; public static String wifiToCheck = MainActivity.wifiName; public AlarmService(){ } public int onStartCommand(Intent intent, int flags, int startId){ Log.i(TAG, "onStartCommand called"); Runnable runnable = new Runnable() { @Override public void run() { String ssid = ""; ConnectivityManager connManager = (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE); NetworkInfo networkInfo = connManager.getNetworkInfo(ConnectivityManager.TYPE_WIFI); if (networkInfo.isConnected()) { final WifiManager wifiManager = (WifiManager) getApplicationContext().getSystemService(Context.WIFI_SERVICE); final WifiInfo connectionInfo = wifiManager.getConnectionInfo(); if (connectionInfo != null && !TextUtils.isEmpty(connectionInfo.getSSID())) { ssid = connectionInfo.getSSID(); } } while(!wifiToCheck.equals(ssid)){ Log.i(TAG, "Goes to SLEEP"); synchronized (this){ try{ Thread.sleep(60000); Log.i(TAG, "WAKE UP"); } catch (InterruptedException e) { e.printStackTrace(); } } networkInfo = connManager.getNetworkInfo(ConnectivityManager.TYPE_WIFI); if (networkInfo.isConnected()) { final WifiManager wifiManager = (WifiManager) getApplicationContext().getSystemService(Context.WIFI_SERVICE); final WifiInfo connectionInfo = wifiManager.getConnectionInfo(); if (connectionInfo != null && !TextUtils.isEmpty(connectionInfo.getSSID())) { ssid = connectionInfo.getSSID(); Log.i(TAG, "Compare " + wifiToCheck + " with " + ssid); } } } if(wifiToCheck.equals(ssid)) { Vibrator v = (Vibrator) getSystemService(Context.VIBRATOR_SERVICE); v.vibrate(2000); // 2000 miliseconds = 2 seconds } } }; Thread alarmThread = new Thread(runnable); alarmThread.start(); return Service.START_STICKY; //Restart service if it got destroyed } public void onDestroy(){ Log.i(TAG, "onDestroyCommand called"); } @Nullable @Override public IBinder onBind(Intent intent) { return null; } ``` Where I start the service in MainActivity.java: ``` public void SetAlarm(View view) { if (!wifiName.equals("")) { Toast.makeText(getApplicationContext(), "Alarm set for: " + wifiName, Toast.LENGTH_SHORT).show(); Intent intent = new Intent(this, AlarmService.class); startService(intent); } else{ Toast.makeText(getApplicationContext(), "Please choose a WiFi name!!!", Toast.LENGTH_SHORT).show(); } } ``` wifiName is the choosen Wifi. 
What is wrong with this code?<issue_comment>username_1: I guess you have to stop the service in the activity's `onPause()` or `onDestroy()` method, e.g. `stopService(new Intent(MainActivity.this,AlarmService.class));` Upvotes: 0 <issue_comment>username_2: As I suggested in the comments, your service could look like this: ``` public class AlarmService extends Service { public AlarmService() {} @Override public void onCreate() { super.onCreate(); IntentFilter intentFilter = new IntentFilter(WifiManager.NETWORK_STATE_CHANGED_ACTION); registerReceiver(receiver, intentFilter); } @Override public void onDestroy() { super.onDestroy(); unregisterReceiver(receiver); } private BroadcastReceiver receiver = new BroadcastReceiver() { @Override public void onReceive(Context context, Intent intent) { String action = intent.getAction(); if (action.equals(WifiManager.NETWORK_STATE_CHANGED_ACTION)) { NetworkInfo networkInfo = intent.getParcelableExtra(WifiManager.EXTRA_NETWORK_INFO); if (networkInfo.isConnected()) { if (networkInfo.getType() == ConnectivityManager.TYPE_WIFI) { WifiManager wifiManager = (WifiManager) getApplicationContext().getSystemService(Context.WIFI_SERVICE); String ssid = wifiManager.getConnectionInfo().getSSID(); if (ssid.equals(/* entered SSID */)) { Vibrator vibrator = (Vibrator) getSystemService(Context.VIBRATOR_SERVICE); vibrator.vibrate(2000); stopSelf(); } } } } } }; } ``` The only part that's "missing" is the entered SSID. Upvotes: 2 [selected_answer]
2018/03/18
321
1,066
<issue_start>username_0: IS there a way to retrieve back my laravel app files after I typed > > git rm -r \* > > > I just want to delete the files that I just deleted in the directory with > > git rm data.txt > > > But all of them. Then I mistakenly typed > > git rm -r \* > > > Now most of my app directories are all gone. these are now what's left ![App FOlder left](https://i.stack.imgur.com/44k0q.jpg)<issue_comment>username_1: As noted in the comment by Inertia, `git reset --hard` would return your index and working tree to the last committed state. You could also do a `git reset HEAD` to get your index back and then `git checkout [files or folders]` if you want to keep any other uncommitted changes. Upvotes: 1 <issue_comment>username_2: In order to restore just one specific deleted file, make sure to add `--` before the file name: ``` git reset -- path/to/deleted/file && git checkout -- path/to/deleted/file ``` The `--` is needed as a qualifier for a file, as git would not know whether it's a file or a branch. Upvotes: 0
2018/03/18
442
1,456
<issue_start>username_0: Is it possible to get the `HWND` from a window by a process name? The name of the window changes on every restart (random windowname), like this: [![image](https://i.stack.imgur.com/3dCGG.png)](https://i.stack.imgur.com/3dCGG.png) I just found something to get it by window name.<issue_comment>username_1: The connection between processes and windows is not obvious. First of all, a process can have several windows. Second, it looks like Windows APIs do not provide a method to look up windows based on a process (which I find weird, given your screenshot shows just that). However, you can go through *all* open windows and filter based on a process. See this question with an elaborate answer on how to do it: [Find Window and change it's name](https://stackoverflow.com/questions/35917056/find-window-and-change-its-name) Upvotes: 1 <issue_comment>username_2: I have found this solution, but I get multiple HWNDs for the one process ID ``` #include void GetAllWindowsFromProcessID(DWORD searchPID, std::vector &wnds) { HWND hCurWnd = NULL; do { hCurWnd = FindWindowEx(NULL, hCurWnd, NULL, NULL); DWORD processID = 0; GetWindowThreadProcessId(hCurWnd, &processID); if (searchPID == processID) { wnds.push\_back(hCurWnd); } } while (hCurWnd != NULL); } int main() { DWORD PID = 0x00001D7C; std::vector HWND\_List; GetAllWindowsFromProcessID(PID, HWND\_List); return 0; } ``` Upvotes: -1 [selected_answer]
2018/03/18
951
3,415
<issue_start>username_0: I want to navigate back from DrawerNav to Login. Using alert('Alert') inside the function is OK. I have a StackNavigator with Login and DrawerNav ``` const MyStackNavigator = StackNavigator({ Login : { screen : Login }, DrawerNav : { screen : DrawerNav } }, { navigationOptions : { header : false } } ); ``` From Login i can navigate to my main screen DrawerNav using ``` _login = () => { this.props.navigation.navigate('DrawerNav'); } ``` Inside the DrawerNav is a DrawerNavigator (obviously) ``` const MyDrawerNavigator = DrawerNavigator({ ... screens ... }, { initialRouteName : '', contentComponent : CustomContent, drawerOpenRoute : 'DrawerOpen', drawerCloseRoute : 'DrawerClose', drawerToggleRoute : 'DrawerToggle' } ); const CustomContent = (props) => ( Logout ) ``` As you can see, the logout is not part of the menu but inside the Drawer ``` _logout = () => { this.props.navigation.navigate('Login'); } ``` This gives me an error ``` undefined is not an object (evaluating '_this.props.navigation') ```<issue_comment>username_1: Problem ------- The `this` in `this._logout` is meant for referencing a `CustomContent` instance, but since `CustomContent` is a functional component (which doesn't have instances), any `this` reference shouldn't be valid but is not throwing an error due to [a bug](https://github.com/facebook/react-native/issues/18942). Solution -------- Change ``` onPress={this._logout} ``` to ``` onPress={() => props.navigation.navigate('Login')} ``` Upvotes: 0 <issue_comment>username_2: As you are quite deep in your navigation stack wouldn't it make sense to use the `popToTop` function? In the same file make sure that you define your `_logout` function as your `CustomContent`, then you could do something like the following: 1. Update the function call by passing `props` to them 2. Update `_logout` to use `popToTop()` Here is how my code would look. ``` _logout = (props) => { props.navigation.popToTop(); } const CustomContent = (props) => ( this.\_logout(props) }> Logout ) ``` You will want to pass the `props` to the function as that will contain your `navigation` prop and allow you to make calls on your navigation stack. Upvotes: 1 <issue_comment>username_3: Change your customComponent to class component from functional.Like.. ``` class CustomContent extends Component { _logout = () => { const resetAction = NavigationActions.reset({ key: null, index: 0, actions: [NavigationActions.navigate({ routeName: 'Login' })], }); this.props.navigation.dispatch(resetAction); } render() { return ( Logout ); } } ``` Upvotes: 0 <issue_comment>username_4: *NOTE:* [Best practice to implement auth screens](https://reactnavigation.org/docs/en/auth-flow.html) is by using [switch navigator](https://reactnavigation.org/docs/en/switch-navigator.html). If you're supposed not to use it for some reason, the following stuffs could help. **1:** As @Asrith K S answered, change your functional component to class component and write `_logout` as class function, where `this.props` would have `navigation` (or) **2:** write navigate action as [anonymous function](https://stackoverflow.com/a/19159927/8359363). ```react const CustomContent = (props) => ( props.navigation.navigate("Login") }> Logout ) ``` Upvotes: 0
2018/03/18
316
1,113
<issue_start>username_0: I have already used this in my webpack.config.js ``` use: [{ loader: 'babel-loader', options: { presets: ['es2015', 'es2016', 'react'] } }] ``` But still I am getting error at the token **let** which I have used. > > I get that uglify doesn't understands ECMAScript-6 > > > Now when i build my webpack with -p, i get the mentioned error, because uglify comes up there. Now, how can i solve this problem as I have already included babel-loader preset es2015 to convert es6 to es5.<issue_comment>username_1: As you've said, your current version of the Uglify plugin doesn't support ES6, so you'll need to upgrade. You have a few options: 1. Upgrade to Webpack 4, which includes the new uglify plugin by default 2. If you need to stay on v3 for whatever reason, [you can follow the instructions on the docs here](https://webpack.js.org/plugins/uglifyjs-webpack-plugin/) to install the new uglify plugin and use it manually. Upvotes: 3 [selected_answer]<issue_comment>username_2: Use terser-webpack-plugin for minifying ES6 code Upvotes: 1
2018/03/18
252
984
<issue_start>username_0: My question is quite simple I think : I'm using jQuery Mobile. And I have on 3 pages exactly the same form for the same purpose. I would like when I input something on any of theme it apply's on the others so when the user changes pages he will see his inputs. I've been searching on the web all I could find is input from 1 specif form to apply to the other but not from any of theme to all. Thanks<issue_comment>username_1: As you've said, your current version of the Uglify plugin doesn't support ES6, so you'll need to upgrade. You have a few options: 1. Upgrade to Webpack 4, which includes the new uglify plugin by default 2. If you need to stay on v3 for whatever reason, [you can follow the instructions on the docs here](https://webpack.js.org/plugins/uglifyjs-webpack-plugin/) to install the new uglify plugin and use it manually. Upvotes: 3 [selected_answer]<issue_comment>username_2: Use terser-webpack-plugin for minifying ES6 code Upvotes: 1
2018/03/18
794
2,907
<issue_start>username_0: I created a new folder Test. run command: git init then I setUp global configuration - mail and name then I added: > > git remote add origin <https://bvcdata.visualstudio.com/_git/MyRepo> > > git push -u origin --all > > > and immediately get error: > > remote: TF401019: The Git repository with name or identifier sdsdsd does not exist or you do not have permissions for the operation you are attempting. > fatal: repository '<https://xxxxxx.visualstudio.com/_git/MyRepo/>' not found > > > I am the owner of the project on VSTS, so I have all privilegies. It all was working till now. Thanks for feedback.<issue_comment>username_1: TLDR: * Delete the `%LocalAppData%\GitCredentialManager\tenant.cache` file on your machine * If that's not enough, run `git credential-manager clear` This error message came up for me when the repository credentials Git had cached became stale. Diagnostic stages for me were: 1. Do I have access to the repository? Check this by browsing to the Code part of your VSTS repository. [![enter image description here](https://i.stack.imgur.com/WZ30g.jpg)](https://i.stack.imgur.com/WZ30g.jpg) 2. If I think I've cleared the Git credential cache (which I first did by just deleting the tenant.cache file), am I prompted for credentials when doing a command-line `git fetch`? Upvotes: 4 [selected_answer]<issue_comment>username_2: I meet the same problem in Ubuntu 18.04.2 LTS and I fixed it by following steps: 1. empty the old .git-credentials file. my path is ~/.git-credentials. 2. execute `git config --global credential.helper store`. 3. execute `git clone` it will ask your username and password. 4. success Upvotes: 0 <issue_comment>username_3: For me, the issue was in using the organization account. I removed the organization qualifier and cloned the repository by **my username** after generating custom credentials from the provided option on devops. The remote URL was changed from:: https://**OrgAccount**@dev.azure.com/Org1/Project/\_git/Mobile%20App To:: https://**Jainam**@dev.azure.com/Org1/Project/\_git/Mobile%20App -- And it worked for me! Upvotes: 0 <issue_comment>username_4: I had this trouble and to repair it, i have removed some generic credentials into : Control Panel -> identification manager -> generic credential in french version : panneau de configuration -> gestionnaire d'identification -> informations d'identification windows The trouble come from my git mail connexion have been moved to another and windows store it and use last every time it. Upvotes: 0 <issue_comment>username_5: Adding another option for solution that worked for us. 1. Opening Windows' **Credential Manager** 2. Moving to **Windows Credentials** tab 3. On 'Generic Credentials' removing the entry that starts with 'git:', You probably see 'git:https://' ![ ](https://i.stack.imgur.com/v0Qva.png) Upvotes: 0
2018/03/18
768
2,589
<issue_start>username_0: How could I check the **platform** (`Android/iOS`) at runtime? I’d like to differ my flutter application behaviour if I'm on `Android` rather than I'm on `iOS`. Like this: ``` _openMap() async { // Android var url = 'geo:52.32,4.917'; if (/* i'm on iOS */) { url = 'http://maps.apple.com/?ll=52.32,4.917'; } if (await canLaunch(url)) { await launch(url); } else { throw 'Could not launch $url'; } } ```<issue_comment>username_1: I've search a bit on SO and tryed also to Google it but this scenario is not so well indexed and so I think my question and anser could help starting developing on Flutter. If you have to check the OS or the Platform of your device at runtime you could use the [`Platform` class of `dart.io` library](https://api.dartlang.org/stable/1.24.3/dart-io/Platform-class.html). ``` import 'dart:io' ``` This way you could check in a way like that: ``` _openMap() async { // Android var url = 'geo:52.32,4.917'; if (Platform.isIOS) { // iOS url = 'http://maps.apple.com/?ll=52.32,4.917'; } else if (Platform.isWindows) { // TODO - something to do? } if (await canLaunch(url)) { await launch(url); } else { throw 'Could not launch $url'; } } ``` Instead if you also need some deep insight of the device you could use the [`dart device_info package`](https://pub.dartlang.org/packages/device_info). There's a good example [here](https://pub.dartlang.org/packages/device_info#-example-tab-). This way you could also check not only the platform you are running on but also the specific version of OS (`iOS 9, 10.3, 11.x, Lollipop, Jellybean`, etc.) and many others device info. **UPDATE:** After Flutter Live 2018 --> Look at this gr8 youtube [video](https://youtu.be/0q2beiiXD98) for Platform Aware Widget and the best way to be compliant with Android and iOS UI from the same codebase. Upvotes: 5 [selected_answer]<issue_comment>username_2: The recommended way to get the current platform is by using `Theme`. ``` Theme.of(context).platform ``` This way, you could potentially override that value with a custom `Theme` at runtime and see immediately all the changes. Upvotes: 4 <issue_comment>username_3: ``` import 'dart:io' String os = Platform.operatingSystem; ``` This is simple way to check platform and also allows a lot of other useful information about the device being used. Link to docs about the [Platform class](https://pub.dev/documentation/platform/latest/platform/Platform-class.html). Upvotes: 2
2018/03/18
784
2,647
<issue_start>username_0: Currently working on a practice project where I am utilising the angular knowledge I have gathered and continue gathering through the online courses I am taking. I am having a bit of trouble accessing a nested array. I saw an online tutorial and I have followed the basis of what was shown but I am obviously not doing something right. The code is shown below. ``` ![]() {{films.title}} =============== {{films.plot}} Starring:{{films.starring}} Runtime:{{films.runtime}} * {{timing.time1}} * {{timing.time2}} * {{timing.time3}} * {{timing.time4} * {{timing.time5}} * {{timing.time6}} ``` And below is a template of the first object in the array. ``` var myapp = angular.module("myapp", []); myapp.controller("maincontroller", ["$scope", function($scope){ $scope.movies = [ { cover: "images/movie-images/ferres.jpg", title: "<NAME>'s Day Off (1987)", plot: "High school student <NAME> wants a day off from school and he's developed an incredibly sophisticated plan to pull it off. He talks his friend Cameron into taking his father's prized Ferrari and with his girlfriend Sloane head into Chicago for the day. While they are taking in what the city has to offer school principal <NAME> is convinced that Ferris is, not for the first time, playing hooky for the day and is hell bent to catch him out. Ferris has anticipated that, much to Rooney's chagrin.", starring: "<NAME>, <NAME>, <NAME>", runtime: "103min", monTimings: [ { time1: "10:45", time2: "13:10", time3: "15:45", time4: "18:25", time5: "20:30" , time6: "21:50", } ] }, ``` When I go to the view, the list in which the timings are supposed to be displayed are not there at all.<issue_comment>username_1: 1. You are missing `in` at your second **ng-repeat** 2. You should **ng-repeat** on `films.monTimings` since `movies.monTimings` doesn't exist since `movies` is an array which doesn't have the `monTimings` property. Only the elements it contains has that. So change this: ``` ng-repeat="timing movies.monTimings" ``` To this: ``` ng-repeat="timing in films.monTimings" ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: You have a mistake in this line . It should be: ``` ``` You have forgotten `in` in `ng-repeat`. Upvotes: 1 <issue_comment>username_3: Since it is a nested array, we need two loops for iteration. You can achieve it by adding the second loop for monTimings in first div as below ``` ``` Upvotes: 2
2018/03/18
463
1,702
<issue_start>username_0: I am trying to set up virtualenv using virtualenvwrapper for my Django project following this guide : [Django Tutorial](https://developer.mozilla.org/en-US/docs/Learn/Server-side/Django/development_environment#Using_a_virtual_environment). However, after installing and writing, ``` export WORKON_HOME=$HOME/.virtualenvs export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3 export PROJECT_HOME=$HOME/Devel source /usr/local/bin/virtualenvwrapper.sh ``` and trying to run source ~/.bash\_profile, i kept getting no directory error. I researched about this error and thought that the error kept coming up because I installed python3 with homebrew. Therefore, i changed VIRTUALENVWRAPPER\_PYTHON directory to /usr/local/Cellar/python/3.6.4\_4 virtualenvwrapper.sh. But now I am getting this error: ``` virtualenvwrapper_run_hook:12: permission denied: /usr/local/Cellar/python/3.6.4_4 virtualenvwrapper.sh: There was a problem running the initialization hooks. If Python could not import the module virtualenvwrapper.hook_loader, check that virtualenvwrapper has been installed for VIRTUALENVWRAPPER_PYTHON=/usr/local/Cellar/python/3.6.4_4 and that PATH is set properly. ``` How can i reset the PATH so that I can use virtualenvwrapper?<issue_comment>username_1: You can try it, add this to your `~/.bashrc`, it works for me ``` export WORKON_HOME=$HOME/.virtualenvs export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3 export VIRTUALENVWRAPPER_VIRTUALENV=/usr/local/bin/virtualenv source /usr/local/bin/virtualenvwrapper.sh ``` Upvotes: 5 [selected_answer]<issue_comment>username_2: Install virtualenvwrapper using pip ``` sudo pip3 install virtualenvwrapper ``` Upvotes: 0
2018/03/18
465
1,647
<issue_start>username_0: I'm quite new to scikit learn, but wanted to try an interesting project. I have longitude and latitudes for points in the UK, which I used to create cluster centers using scikit learns KMeans class. To visualise this data, rather than having the points as clusters, I wanted to instead draw boundaries around each cluster. For example, if one cluster was London and the other Oxford, I currently have a point at the center of each city, but I was wondering if there's a way to use this data to create a boundary line based on my clusters instead? Here is my code so far to create the clusters: ``` import pandas as pd import matplotlib.pyplot as plt from sklearn.cluster import KMeans location1="XXX" df = pd.read_csv(location1, encoding = "ISO-8859-1") #Run kmeans clustering X = df[['long','lat']].values #~2k locations in the UK y=df['label'].values #Label is a 0 or 1 kmeans = KMeans(n_clusters=30, random_state=0).fit(X, y) centers=kmeans.cluster_centers_ plt.scatter(centers[:,0],centers[:,1], marker='s', s=100) ``` So I would like to be able to convert the centers in the above example to lines that demarcate each of the regions -- is this possible? Thanks, Anant<issue_comment>username_1: You can try it, add this to your `~/.bashrc`, it works for me ``` export WORKON_HOME=$HOME/.virtualenvs export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3 export VIRTUALENVWRAPPER_VIRTUALENV=/usr/local/bin/virtualenv source /usr/local/bin/virtualenvwrapper.sh ``` Upvotes: 5 [selected_answer]<issue_comment>username_2: Install virtualenvwrapper using pip ``` sudo pip3 install virtualenvwrapper ``` Upvotes: 0
2018/03/18
961
3,145
<issue_start>username_0: I've two collections, editors and books. Each book is associated with a parentId field to an editor and each book is has a score ( say 1, 2, 3 ) and a type ( sci-fi, romance, etc ...) . Editors: ``` { _id: 1, name: "editor1" } { _id: 2, name: "editor2" } ... ``` and the books ``` { _id: 1, name: "book1", score: 1, parentId: 1, type: "sci-fi" } { _id: 2, name: "book2", score: 3, parentId: 1, type: "romance" } { _id: n, name: "bookn", score: 1, parentId: m, type: "detective" } ``` I want to write an aggregation that will sort the editors based on the score of all books associated to it, and optionally by type of the books. So i can retrieve the first 10 editors of sci-fi with the most popular books, or just the first 10 editors with the most popular books regardless of categories. The catch ? Using mongo 3.2 . I've the strong impression that this is possible with 3.4 and 3.6 (And I'd love to see how), but at the moment, the product I'm shipping is with mongo 3.2 and I can't change that ... Aggregating on the editors collection, I've tried to first lookup all the books for an editor, then unwind, group by parentId and create a new field score with the sum of the score of all books in this group, but then I'm stuck trying to use this score to associate it to the editors and finally sort the result. I'm using this aggregation in a meteor subscription.<issue_comment>username_1: Try following aggregation: ``` db.Editors.aggregate([ { $lookup: { from: "Books", localField: "_id", foreignField: "parentId", as: "books" } }, { $unwind: "$books" }, { $group: { _id: "_id", name: { $first: "$name" }, types: { $addToSet: "$books.type" }, totalScore: { $sum: "$books.score" } } }, { $match: { "types": "sci-fi" } }, { $sort: { "totalScore": -1 } } ]) ``` One catch with `$match` stage. You can use it before `$group` to calculate scores only for `sci-fi` **books** or after `$group` to calculate score for all books but then get only those authors having at least one `sci-fi` book (as in my answer) Upvotes: 1 <issue_comment>username_2: You can try below aggregation. > > So i can retrieve the first 10 editors of sci-fi with the most popular > books > > > ``` db.editors.aggregate([ {"$lookup":{ "from":"books", "localField":"_id", "foreignField":"parentId", "as":"books" }}, {"$unwind":"$books"}, {"$match":{"books.type":"sci-fi"}}, {"$group":{ "_id":"$_id", "name":{"$first":"$name"}, "scores":{"$sum":"$books.score"} }}, {"$sort":{"scores":-1}}, {"$limit":10} ]) ``` > > or just the first 10 editors with the most popular books regardless of > categories. > > > ``` db.editors.aggregate([ {"$lookup":{ "from":"books", "localField":"_id", "foreignField":"parentId", "as":"books" }}, {"$project":{ "name":1, "scores":{"$sum":"$books.score"} }}, {"$sort":{"scores":-1}} ]) ``` Upvotes: 3 [selected_answer]
2018/03/18
2,978
7,340
<issue_start>username_0: I get the following error when running a c program: ``` *** Error in `./a.out': double free or corruption (!prev): 0x0000000000bb0470 *** ``` I believe this is due to fclose() being called in the program, It is a Lexical Analyzer for compilers in c language and it uses file pointers . Here is code: ``` #include #include typedef struct Terminal\_table { int index; char symbol[10],indicator; }Terminal\_table; typedef struct Identifier\_table { int index; char name[10]; }Identifier\_table; typedef struct Literal\_table { int SR,name,precision; char base[10],scale[10]; }Literal\_table; typedef struct Uniform\_symbol\_table { int SR,index; char name[10],symbol\_class[10]; }Uniform\_symbol\_table; int main() { FILE \*fp\_PGM,\*fp\_TT,\*fp\_LT,\*fp\_UST,\*fp\_IT,\*fp\_IT1,\*fp\_LT1; Terminal\_table TT; Identifier\_table IT,IT1; Literal\_table LT,LT1; Uniform\_symbol\_table UST; int i=0,flag,flag\_IT,flag\_LT,a; char ch,buffer[10]; fp\_PGM=fopen("PGM\_LEX.txt","r"); fp\_UST=fopen("UST.TXT","w"); fp\_IT=fopen("IT.TXT","w"); fp\_LT=fopen("LT.TXT","w"); UST.SR=1; IT.index=1; LT.SR=1; for(i=0;i<10;i++) buffer[i]='\0'; i=0; while(!feof(fp\_PGM)) { ch=fgetc(fp\_PGM); if(isalpha(ch) || isdigit(ch)) buffer[i++]=ch; else if(ch!='"') { flag=0; fp\_TT=fopen("TT.txt","r"); while(!feof(fp\_TT)) { fscanf(fp\_TT,"%d %s %c\n",&TT.index,&TT.symbol,&TT.indicator); if(strcmp(TT.symbol,buffer)==0) { flag=1; strcpy(UST.name,buffer); UST.index=TT.index; if(TT.indicator=='Y') strcpy(UST.symbol\_class,"TRM"); else strcpy(UST.symbol\_class,"KEY"); fprintf(fp\_UST,"%d %s %s %d \n",UST.SR++,UST.name,UST.symbol\_class,UST.index); break; } } fclose(fp\_TT); if(flag==0) { if(isalpha(buffer[0])) { flag\_IT=0; fclose(fp\_IT); fp\_IT1=fopen("IT.TXT","r"); while(!feof(fp\_IT1)) { fscanf(fp\_IT1,"%d %s\n",&IT1.index,IT1.name); if(strcmp(IT1.name,buffer)==0) { flag\_IT=1; break; } } fclose(fp\_IT1); strcpy(UST.name,buffer); UST.index=TT.index; strcpy(UST.symbol\_class,"IDN"); if(flag\_IT==1) fprintf(fp\_UST,"%d %s %s %d \n",UST.SR++,UST.name,UST.symbol\_class,IT1.index); if(flag\_IT==0) { fp\_IT=fopen("IT.TXT","a"); strcpy(IT.name,buffer); fprintf(fp\_IT,"%d %s\n",IT.index++,IT.name); fprintf(fp\_UST,"%d %s %s %d \n",UST.SR++,UST.name,UST.symbol\_class,IT.index-1); fclose(fp\_IT); } } else if(isdigit(buffer[0])) { flag\_LT=0; fclose(fp\_LT); fp\_LT=fopen("LT.TXT","r"); while(!feof(fp\_LT)) { fscanf(fp\_LT,"%d %d %s %s %s\n",&LT1.SR,&LT1.name,&LT1.precision,&LT1.base,&LT1.scale); a=atoi(buffer); if(LT1.name==a) { flag\_LT=1; break; } } fclose(fp\_LT); strcpy(UST.name,buffer); UST.index=TT.index; strcpy(UST.symbol\_class,"LIT"); if(flag\_LT==1) fprintf(fp\_UST,"%d %s %s %d \n",UST.SR++,UST.name,UST.symbol\_class,LT1.SR); if(flag\_LT==0) { fclose(fp\_LT); fp\_LT=fopen("LT.TXT","a"); LT.name=atoi(buffer); LT.precision=2; strcpy(LT.base,"DECIMAL"); strcpy(LT.scale,"FIXED"); strcpy(UST.name,buffer); fprintf(fp\_LT,"%d %d %d %s %s\n",LT.SR++,LT.name,LT.precision,LT.base,LT.scale); fprintf(fp\_UST,"%d %s %s %d \n",UST.SR++,UST.name,UST.symbol\_class,LT.SR-1); fclose(fp\_LT); } } } for(i=0;i<10;i++) buffer[i]='\0'; buffer[0]=ch; fp\_TT=fopen("TT.txt","r"); while(!feof(fp\_TT)) { fscanf(fp\_TT,"%d %s %c\n",&TT.index,&TT.symbol,&TT.indicator); if(strcmp(TT.symbol,buffer)==0) { strcpy(UST.name,buffer); UST.index=TT.index; strcpy(UST.symbol\_class,"TRM"); fprintf(fp\_UST,"%d %s %s %d \n",UST.SR++,UST.name,UST.symbol\_class,UST.index); break; } } for(i=0;i<10;i++) buffer[i]='\0'; fclose(fp\_TT); i=0; } else if(ch=='"') { 
buffer[0]=ch; fp\_TT=fopen("TT.txt","r"); while(!feof(fp\_TT)) { fscanf(fp\_TT,"%d %s %c\n",&TT.index,&TT.symbol,&TT.indicator); if(strcmp(TT.symbol,buffer)==0) { strcpy(UST.name,buffer); UST.index=TT.index; if(TT.indicator=='Y') strcpy(UST.symbol\_class,"TRM"); else strcpy(UST.symbol\_class,"KEY"); fprintf(fp\_UST,"%d %s %s %d \n",UST.SR++,UST.name,UST.symbol\_class,UST.index); break; } } fclose(fp\_TT); ch=fgetc(fp\_PGM); while(ch!='"') { ch=fgetc(fp\_PGM); } strcpy(UST.name,buffer); UST.index=TT.index; if(TT.indicator=='Y') strcpy(UST.symbol\_class,"TRM"); else strcpy(UST.symbol\_class,"KEY"); fprintf(fp\_UST,"%d %s %s %d \n",UST.SR++,UST.name,UST.symbol\_class,UST.index); ch=fgetc(fp\_PGM); } } fclose(fp\_PGM); fclose(fp\_UST); fclose(fp\_IT); fclose(fp\_LT); } ``` Error in `./a.out': double free or corruption (!prev): 0x0000000000bb0470 there are multiple files operations in this program it includes source program (it is simple c program without preprocessors), Terminal tale as input and this text files need to be already created before running above code TT.txt => ``` 1 void N 2 main N 3 int N 4 float N 5 printf N 6 scanf N 7 , Y 8 ; Y 9 = Y 10 " Y 11 { Y 12 } Y 13 * Y 14 / Y 15 + Y 16 - Y 17 ( Y 18 ) Y 19 < Y 20 > Y 21 getch N ``` PGM\_LEX.txt => ``` void main() { int i=10,j; printf("%d",i); i=(j/10); getch(); } ``` thanks!<issue_comment>username_1: Beside other problems, valgrind reports ``` ==27338== Invalid read of size 4 ==27338== at 0x4EA57D4: fclose@@GLIBC_2.2.5 (in /usr/lib64/libc-2.26.so) ==27338== by 0x400B32: main (gggg4.c:135) ==27338== Address 0x5211310 is 0 bytes inside a block of size 552 free'd ==27338== at 0x4C2DD18: free (vg_replace_malloc.c:530) ==27338== by 0x4EA587D: fclose@@GLIBC_2.2.5 (in /usr/lib64/libc-2.26.so) ==27338== by 0x400AFB: main (gggg4.c:127) ==27338== Block was alloc'd at ==27338== at 0x4C2CB6B: malloc (vg_replace_malloc.c:299) ==27338== by 0x4EA622C: __fopen_internal (in /usr/lib64/libc-2.26.so) ==27338== by 0x400E00: main (gggg4.c:116) ``` These two lines are ``` 127 fclose(fp_LT); ... 135 fclose(fp_LT); ``` Upvotes: 2 <issue_comment>username_2: As @username_1 mentioned, in line 135 of your code, in the `if(flag_LT==0)` case, you call `fclose(fp_LT)`, but you already closed it in line 127. The tool [cppcheck](http://cppcheck.sourceforge.net/ "cppcheck") also finds this bug. Instead of repeatedly opening and closing the same file, you can use `rewind()` to go back to the start of a file. Also, your program is not checking the result of any of the `fopen()`, `fscanf()` and `fclose()` calls. That means that if you have corrupted data, or there are any errors reading and writing the files, your program will ignore these errors, with possibly bad consequences. Upvotes: 1 <issue_comment>username_3: This section looks suspicious: ``` fclose(fp_LT); // fp_LT is closed strcpy(UST.name,buffer); UST.index=TT.index; strcpy(UST.symbol_class,"LIT"); if(flag_LT==1) fprintf(fp_UST,"%d %s %s %d \n",UST.SR++,UST.name,UST.symbol_class,LT1.SR); if(flag_LT==0) { fclose(fp_LT); // fp_LT is closed again ``` In case `flag_LT` is zero, it seems you call `fclose` twice. That is undefined behavior. Upvotes: 0
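A minimal, self-contained sketch of the fix the answers point to — close each `FILE*` exactly once by nulling the pointer after `fclose`, so a second close attempt (the double free valgrind reports above) becomes a no-op. The helper name and the file name are illustrative only; this is not the full lexer.

```
#include <stdio.h>
#include <stdlib.h>

/* Close a stream exactly once: fclose only if it is still open, then clear the
   pointer so any later call becomes a harmless no-op instead of undefined behavior. */
static void close_once(FILE **fpp)
{
    if (fpp != NULL && *fpp != NULL) {
        fclose(*fpp);
        *fpp = NULL;
    }
}

int main(void)
{
    FILE *fp_LT = fopen("LT.TXT", "w");
    if (fp_LT == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }
    fprintf(fp_LT, "example\n");

    close_once(&fp_LT);   /* first close frees the stream */
    close_once(&fp_LT);   /* second call is now safe: the pointer is NULL */
    return 0;
}
```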
2018/03/18
910
2,590
<issue_start>username_0: I am using the code ``` private static Attachment HeroCard() { var hc = new HeroCard { Images=new List { new CardImage(@"C:\Users\.....\imgs\testImage.jpg") } }; return hc.ToAttachment(); } ``` To load an image in a hero card's attachment. This works fine but if I try to use the local folder instead e.x ``` @"~\imgs\testImage.jpg" ``` The image fails to load. I have tried different other formats regarding the path with no success. What am I missing?<issue_comment>username_1: Beside other problems, valgrind reports ``` ==27338== Invalid read of size 4 ==27338== at 0x4EA57D4: fclose@@GLIBC_2.2.5 (in /usr/lib64/libc-2.26.so) ==27338== by 0x400B32: main (gggg4.c:135) ==27338== Address 0x5211310 is 0 bytes inside a block of size 552 free'd ==27338== at 0x4C2DD18: free (vg_replace_malloc.c:530) ==27338== by 0x4EA587D: fclose@@GLIBC_2.2.5 (in /usr/lib64/libc-2.26.so) ==27338== by 0x400AFB: main (gggg4.c:127) ==27338== Block was alloc'd at ==27338== at 0x4C2CB6B: malloc (vg_replace_malloc.c:299) ==27338== by 0x4EA622C: __fopen_internal (in /usr/lib64/libc-2.26.so) ==27338== by 0x400E00: main (gggg4.c:116) ``` These two lines are ``` 127 fclose(fp_LT); ... 135 fclose(fp_LT); ``` Upvotes: 2 <issue_comment>username_2: As @username_1 mentioned, in line 135 of your code, in the `if(flag_LT==0)` case, you call `fclose(fp_LT)`, but you already closed it in line 127. The tool [cppcheck](http://cppcheck.sourceforge.net/ "cppcheck") also finds this bug. Instead of repeatedly opening and closing the same file, you can use `rewind()` to go back to the start of a file. Also, your program is not checking the result of any of the `fopen()`, `fscanf()` and `fclose()` calls. That means that if you have corrupted data, or there are any errors reading and writing the files, your program will ignore these errors, with possibly bad consequences. Upvotes: 1 <issue_comment>username_3: This section looks suspicious: ``` fclose(fp_LT); // fp_LT is closed strcpy(UST.name,buffer); UST.index=TT.index; strcpy(UST.symbol_class,"LIT"); if(flag_LT==1) fprintf(fp_UST,"%d %s %s %d \n",UST.SR++,UST.name,UST.symbol_class,LT1.SR); if(flag_LT==0) { fclose(fp_LT); // fp_LT is closed again ``` In case `flag_LT` is zero, it seems you call `fclose` twice. That is undefined behavior. Upvotes: 0
2018/03/18
1,214
5,329
<issue_start>username_0: I play with cancelation token, and I would like to understand how this works. I have two async methods(in my example two but in theory I can have 100). I want to cancel work in all async methods if one of them throws an exception. My idea is to cancel token in exception where all methods are called. When a token is canceled I would expect that other method stop working, but this is not happening. ``` using System; using System.Threading; using System.Threading.Tasks; namespace CancelationTest { class Program { static void Main(string[] args) { new Test(); Console.ReadKey(); } } public class Test { public Test() { Task.Run(() => MainAsync()); } public static async Task MainAsync() { var cancellationTokenSource = new CancellationTokenSource(); try { var firstTask = FirstAsync(cancellationTokenSource.Token); var secondTask = SecondAsync(cancellationTokenSource.Token); Thread.Sleep(50); Console.WriteLine("Begin"); await secondTask; Console.WriteLine("hello"); await firstTask; Console.WriteLine("world"); Console.ReadKey(); } catch (OperationCanceledException e) { Console.WriteLine("Main OperationCanceledException cancel"); } catch (Exception e) { Console.WriteLine("Main Exception + Cancel"); cancellationTokenSource.Cancel(); } } public static async Task FirstAsync(CancellationToken c) { c.ThrowIfCancellationRequested(); await Task.Delay(1000, c); Console.WriteLine("Exception in first call"); throw new NotImplementedException("Exception in first call"); } public static async Task SecondAsync(CancellationToken c) { c.ThrowIfCancellationRequested(); await Task.Delay(15000, c); Console.WriteLine("SecondAsync is finished"); } } } ``` The second method finish work and delay task for 15 seconds even when the first method throws an exception. What is result: > > Begin > > > Exception in the first call > > > SecondAsync is finished > > > hello > > > Main Exception + Cancel > > > I would expect that secondAsync stop delay and throw OperationCancelException. I would expect this result: > > Begin > > > Exception in first call > > > Main Exception + Cancel > > > Main OperationCanceledException cancel > > > Where am I making mistake? Why method SecondAsync is fully executed and doesn't throw an exception? And if I change the order of SecondAsync and FirstAsync than Second method stop to delay when the token is canceled and throw an exception.<issue_comment>username_1: Your [`CancellationTokenSource`](https://msdn.microsoft.com/en-us/library/system.threading.cancellationtokensource(v=vs.110).aspx) needs to be in a scope that is accessible to the methods being called, and you need to call `CancellationTokenSource.Cancel()` in order to cancel all of the operations using that source. You can also call `CancellationTokenSource.CancelAfter(TimeSpan)` to cancel it on a delay. Upvotes: 0 <issue_comment>username_2: Because the relevant part of your code is: ``` try { ... await secondTask; await firstTask; } catch(...) { source.Cancel(); } ``` Now while the firstTask is started and has thrown, it is awaited *after* the secondTask. The exception won't surface in the caller until the task is awaited. And so the catch clause will only execute after secondTask has already completed. The Cancel() is just happening too late. If you want your firstTask to interrupt the second one you will have to 1. pass the source into FirstAsync and call Cancel() there. A little ugly. 2. change the await structure. I think your sample is a little artificial. 
Use Parallel.Invoke() or something similar and it will happen quite naturally. Upvotes: 3 [selected_answer]<issue_comment>username_3: The cancellation must be checked for manually. You only do this once, by calling `c.ThrowIfCancellationRequested();` before the delay. So you will not register any cancellation once the delay has started. Instead, you should cyclically check for the cancellation in a loop, with whatever loop period is appropriate for your application: ``` public static async Task SecondAsync(CancellationToken c) { for (int i = 0; i<150; i++) { c.ThrowIfCancellationRequested(); await Task.Delay(100, c); } c.ThrowIfCancellationRequested(); Console.WriteLine("SecondAsync is finished"); } ``` In production code you might want to insert the cancellation check every now and then if you have a very linear workflow. Edit: Also, as <NAME> pointed out in [their answer](https://stackoverflow.com/a/49347948/15432738), the exception propagated through `await firstAsync` cannot be caught before `await secondAsync` has finished in your code. Upvotes: 0
2018/03/18
1,187
4,928
<issue_start>username_0: **This code sends the token in the email** ``` [HttpPost] [AllowAnonymous] [ValidateAntiForgeryToken] public async Task ForgotPassword(ForgotPasswordViewModel model) { if (ModelState.IsValid) { var user = await UserManager.FindByNameAsync(model.Email); if (user == null || !(await UserManager.IsEmailConfirmedAsync(user.Id))) { return View("ForgotPasswordConfirmation"); } var code = await UserManager.GenerateEmailConfirmationTokenAsync(user.Id); var client = new SmtpClient(send is OK) { Credentials = new NetworkCredential("somemail", "pass"),EnableSsl = true }; var callbackUrl = Url.Action("ResetPassword", "Account", new { userId = user.Id, code = code }, protocol: Request.Url.Scheme); var message = new MailMessage { From = new MailAddress("somemail"),Subject = "Reset pass", Body = " reset pass here : " + callbackUrl, IsBodyHtml = true }; message.To.Add(model.Email); client.Send(message); return RedirectToAction("ForgotPasswordConfirmation", "Account"); } return View(model); } --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ``` **This catch it and send to View (I tried it two ways)** ``` [HttpGet] [AllowAnonymous] public ActionResult ResetPassword(string code) { //return code == null ? View("Error") : View(new ResetPasswordViewModel { Code = code }); return code == null ? View("Error") : View(); } ``` **And here is result "Invalid Token."** ``` [HttpPost] [AllowAnonymous] [ValidateAntiForgeryToken] public async Task ResetPassword(ResetPasswordViewModel model) { if (!ModelState.IsValid) { return View(model); } var user = await UserManager.FindByNameAsync(model.Email); if (user == null) { // Don't reveal that the user does not exist return RedirectToAction("ResetPasswordConfirmation", "Account"); } var result = await UserManager.ResetPasswordAsync(user.Id, model.Code, model.Password); if (result.Succeeded) { return RedirectToAction("ResetPasswordConfirmation", "Account"); } AddErrors(result); return View(); } ``` **But why ? The code is the same when sending ( callbackUrl ) upon takeover (string code) and at the end ( model.Code ).** So I am really confused. Here is a lot of solution but there is the only problem with the null code or different code. I have good code and still error Invalid Token. Please have someone any ideas?<issue_comment>username_1: Your [`CancellationTokenSource`](https://msdn.microsoft.com/en-us/library/system.threading.cancellationtokensource(v=vs.110).aspx) needs to be in a scope that is accessible to the methods being called, and you need to call `CancellationTokenSource.Cancel()` in order to cancel all of the operations using that source. You can also call `CancellationTokenSource.CancelAfter(TimeSpan)` to cancel it on a delay. Upvotes: 0 <issue_comment>username_2: Because the relevant part of your code is: ``` try { ... await secondTask; await firstTask; } catch(...) { source.Cancel(); } ``` Now while the firstTask is started and has thrown, it is awaited *after* the secondTask. The exception won't surface in the caller until the task is awaited. And so the catch clause will only execute after secondTask has already completed. The Cancel() is just happening too late. If you want your firstTask to interrupt the second one you will have to 1. pass the source into FirstAsync and call Cancel() there. A little ugly. 2. change the await structure. 
I think your sample is a little artificial. Use Parallel.Invoke() or something similar and it will happen quite naturally. Upvotes: 3 [selected_answer]<issue_comment>username_3: The cancellation must be checked on manually. You only do this once by calling `c.ThrowIfCancellationRequested();` which is before the delay. So you will not register any cancellation once the delay started. Instead, you should rather cyclically check for the cancellation in a loop with whatever loop time is apropiate for your application: ``` public static async Task SecondAsync(CancellationToken c) { for (int i = 0; i<150; i++) { c.ThrowIfCancellationRequested(); await Task.Delay(100, c); } c.ThrowIfCancellationRequested(); Console.WriteLine("SecondAsync is finished"); } ``` In production code you might want to insert the cancellation check every now and then if you have a very linear workflow. Edit: Also, as <NAME> pointed out in [their answer](https://stackoverflow.com/a/49347948/15432738), the exception propagated through `await firstAsync` cannot be catched before `await secondAsnc` finished in your code. Upvotes: 0
2018/03/18
1,021
3,674
<issue_start>username_0: I am trying to run the following inside a powershell script: ``` ffmpeg.exe -hide_banner -i .\input.mkv -passlogfile input -c:v libvpx-vp9 -b:v 0 -crf 31 -tile-columns 6 -tile-rows 2 -threads 8 -pass 2 -speed 1 -frame-parallel 1 -row-mt 1 -c:a libopus -b:a 256000 -c:s copy -af aformat=channel_layouts=5.1 -auto-alt-ref 1 -lag-in-frames 25 -y output.mkv ``` This works fine on either cmd or powershell line directly, but not if I try to run it inside a .ps1 with &. I get the following error: ``` Unrecognized option '-hide_banner -i .\input.mkv -passlogfile input -c:v libvpx-vp9 -b:v 0 -crf 31 -tile-columns 6 -tile-rows 2 -threads 8 -pass 2 -speed 1 -frame-parallel 1 -row-mt 1 -c:a libopus -b:a 256000 -c:s copy -auto-alt-ref 1 -lag-in-frames 25 -y -af aformat=channel_layouts=5.1 output.mkv'. Error splitting the argument list: Option not found ``` Digging around a bit it seems that the double = with the -af afilter=channel\_layouts=5.1 is pissing powershell off and I have no clue how to get around it. Already tried escaping it in some ways with no luck. Is there any way I can pass these kind of arguments to my exe without powershell complaining to be unable to split up the arguments? Don't understand the shell tries it in the first place anyway as it should all go to my ffmpeg.exe in the first place.<issue_comment>username_1: Your [`CancellationTokenSource`](https://msdn.microsoft.com/en-us/library/system.threading.cancellationtokensource(v=vs.110).aspx) needs to be in a scope that is accessible to the methods being called, and you need to call `CancellationTokenSource.Cancel()` in order to cancel all of the operations using that source. You can also call `CancellationTokenSource.CancelAfter(TimeSpan)` to cancel it on a delay. Upvotes: 0 <issue_comment>username_2: Because the relevant part of your code is: ``` try { ... await secondTask; await firstTask; } catch(...) { source.Cancel(); } ``` Now while the firstTask is started and has thrown, it is awaited *after* the secondTask. The exception won't surface in the caller until the task is awaited. And so the catch clause will only execute after secondTask has already completed. The Cancel() is just happening too late. If you want your firstTask to interrupt the second one you will have to 1. pass the source into FirstAsync and call Cancel() there. A little ugly. 2. change the await structure. I think your sample is a little artificial. Use Parallel.Invoke() or something similar and it will happen quite naturally. Upvotes: 3 [selected_answer]<issue_comment>username_3: The cancellation must be checked on manually. You only do this once by calling `c.ThrowIfCancellationRequested();` which is before the delay. So you will not register any cancellation once the delay started. Instead, you should rather cyclically check for the cancellation in a loop with whatever loop time is apropiate for your application: ``` public static async Task SecondAsync(CancellationToken c) { for (int i = 0; i<150; i++) { c.ThrowIfCancellationRequested(); await Task.Delay(100, c); } c.ThrowIfCancellationRequested(); Console.WriteLine("SecondAsync is finished"); } ``` In production code you might want to insert the cancellation check every now and then if you have a very linear workflow. Edit: Also, as <NAME> pointed out in [their answer](https://stackoverflow.com/a/49347948/15432738), the exception propagated through `await firstAsync` cannot be catched before `await secondAsnc` finished in your code. Upvotes: 0
2018/03/18
441
1,407
<issue_start>username_0: All I want to do is add an image to the centre of my view in Swift Playgrounds. I have tried every solution on the internet in which none of them works. Where do I put my image file? What do I code to actually display it? Here is an example of code that doesn't work: ``` let image = UIImage(named: "egg.png") let imageView = UIImageView(image: image) imageView.frame = CGRect(x: 0, y: 0, width: 100, height: 100) view.addSubview(imageView) ```<issue_comment>username_1: You must make sure your images are in the Resources Folder. Access these by clicking Command + 1. Upvotes: 0 <issue_comment>username_2: You drag it into the Resources folder, then access it from the Bundle. Sample code: ``` //: A UIKit based Playground for presenting user interface import UIKit import PlaygroundSupport let view = UIView(frame: CGRect(x: 0, y: 0, width: 916, height: 611)) view.backgroundColor = .white let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 916, height: 611)) view.addSubview(imageView) if let sample = Bundle.main.path(forResource: "sample", ofType: "jpg") { let image = UIImage(contentsOfFile: sample) imageView.image = image } PlaygroundPage.current.liveView = view PlaygroundPage.current.liveView ``` Here's the result: [![playground result](https://i.stack.imgur.com/1XvSs.png)](https://i.stack.imgur.com/1XvSs.png) Upvotes: 2 [selected_answer]
2018/03/18
1,377
4,574
<issue_start>username_0: basically I have a form of four HTML checkboxes. If the one with value "one" is ticked I want to disable the one with value "two" and visa versa. The rest of the checkboxes operate as normal. I've almost got it there but when I untick the box it doesn't re-enable the other again. HTML ``` ``` Script ``` $( "input" ).change(function() { var val = $(this).val(); if(val == "one") { $("input[value='two']").prop( "disabled", true ); } else if(val== "two") { $("input[value='one']").prop( "disabled", true ); } }); ``` [Fiddle link](https://jsfiddle.net/znx4agwc/24/)<issue_comment>username_1: You can use `$(this).is(":checked")` to assign on `prop("disabled"..` ```js $(function() { $('input').change(function() { var val = $(this).val(); if (val == "one") { $("input[value='two']").prop("disabled", $(this).is(":checked")); } else if (val == "two") { $("input[value='one']").prop("disabled", $(this).is(":checked")); } }); }); ``` ```html ``` Upvotes: 1 [selected_answer]<issue_comment>username_2: > > I've almost got it there but when I untick the box it doesn't re-enable the other again. > > > You haven't handled that case in your handler, all of your calls to `prop` use `true` as the value (e.g., they all disable). Instead, use `this.checked` to use `true` when the current checkbox is checked, or `false` when it isn't: ```js $("input").change(function() { var val = $(this).val(); if (val == "one") { $("input[value='two']").prop("disabled", this.checked); } else if (val == "two") { $("input[value='one']").prop("disabled", this.checked); } }); ``` ```html ``` We can also make that a bit shorter, since the two branches do exactly the same thing, just with different selectors: ```js $("input").change(function() { var val = $(this).val(); if (val === "one" || val === "two") { var other = val == "one" ? "two" : "one"; $("input[value='" + other + "']").prop("disabled", this.checked); } }); ``` ```html ``` Or we could even do it so that the HTML defines the relationship between the checkboxes using a `data-*` attribute: ```js $("input").change(function() { var other = $(this).attr("data-disable"); if (other) { $("input[value='" + other + "']").prop("disabled", this.checked); } }); ``` ```html ``` (Note: You could also use [`data`](http://api.jquery.com/data/) here, but there's no reason to unless you need the additional features it adds. It's **not** just an accessor for `data-*` attributes; it's both more and less than that.) Upvotes: 0 <issue_comment>username_3: ``` $( "input" ).change(function(e) { var val = $(this).val(); if(val == "one") { $("input[value='two']").prop( "disabled", e.target.checked ); } else if(val== "two") { $("input[value='one']").prop( "disabled", e.target.checked ); } }); ``` this will do the magic pay attention: you should pass `e` - event argument, and check its state `e.target.checked` Upvotes: 0 <issue_comment>username_4: This should work our for you: [Fiddle Link](https://jsfiddle.net/znx4agwc/43/) ```js $( "input" ).change(function() { var val = $(this).val(); if(val == "one") { if(!$("input[value='two']").is('[disabled]')){ $("input[value='two']").prop( "disabled", true ); }else{ $("input[value='two']").prop( "disabled", false ); } } else if(val== "two") { if(!$("input[value='one']").is('[disabled]')){ $("input[value='one']").prop( "disabled", true ); }else{ $("input[value='one']").prop( "disabled", false ); } } }); ``` ```html ``` Upvotes: 1 <issue_comment>username_5: You need to use the current state of the checkbox. 
To accomplish that, you can use the function **[`$.is()`](http://api.jquery.com/is/)** along with the flag `:checked`. *This alternative uses a **[`data-attribute`](https://api.jquery.com/data/)** to make it shorter.* With a **[`data-attribute`](https://api.jquery.com/data/)** you don't mix up the values and the logic for disabling/enabling the other checkboxes. Therefore, the checkboxes can handle any values without requiring changes to your disable/enable logic. ```js // This way, you're separating the disable/enable logic. $("input").change(function() { var target = $(this).data('target'); if (target) $("input[data-disenable='" + target + "']").prop("disabled", $(this).is(':checked')); }); ``` ```html ``` Upvotes: 1
2018/03/18
1,738
5,836
<issue_start>username_0: I'm using the extensions news and eventnews, so if have two different "types" of news now. The news extension comes with the fields related and related\_from. I want to have also fields related\_event and related\_event\_from. The original related fields should store relations to news which are just news, the new fields relations to news which are events. The difference is made in field "is\_event", so I added the foreign\_table\_where clause to the TCA. For storing the data I have to use the same mm table. Unfortunately on saving only the second field is respected, the first one is not. How can I achieve storing both? Will I need to use a TCA hook or is it possible with standard TCA settings or ..? My TCA as of now: ``` 'related' => [ 'exclude' => 1, 'l10n_mode' => 'mergeIfNotBlank', 'label' => 'LLL:EXT:news/Resources/Private/Language/locallang_db.xlf:tx_news_domain_model_news.related', 'config' => [ 'type' => 'select', 'allowed' => 'tx_news_domain_model_news', 'foreign_table' => 'tx_news_domain_model_news', 'foreign_table_where' => 'AND tx_news_domain_model_news.deleted = 0 AND tx_news_domain_model_news.hidden = 0 AND tx_news_domain_model_news.is_event = 0 ORDER BY tx_news_domain_model_news.datetime DESC', 'MM_opposite_field' => 'related_from', 'size' => 5, 'minitems' => 0, 'maxitems' => 100, 'MM' => 'tx_news_domain_model_news_related_mm', 'wizards' => array( 'suggest' => array( 'type' => 'suggest', 'default' => array( 'searchWholePhrase' => TRUE ) ), ), ] ], 'related_event' => [ 'exclude' => 1, 'l10n_mode' => 'mergeIfNotBlank', 'label' => 'LLL:EXT:dreipc_myadlershof/Resources/Private/Language/locallang_db.xlf:tx_dreipcmyadlershof_domain_model_news.related_event', 'config' => [ 'type' => 'select', 'allowed' => 'tx_news_domain_model_news', 'foreign_table' => 'tx_news_domain_model_news', 'foreign_table_where' => 'AND tx_news_domain_model_news.deleted = 0 AND tx_news_domain_model_news.hidden = 0 AND tx_news_domain_model_news.is_event = 1 AND tx_news_domain_model_news.enable = 1 AND tx_news_domain_model_news.sample = 0 ORDER BY tx_news_domain_model_news.datetime DESC', 'MM_opposite_field' => 'related_event_from', 'size' => 5, 'minitems' => 0, 'maxitems' => 100, 'MM' => 'tx_news_domain_model_news_related_mm', 'wizards' => array( 'suggest' => array( 'type' => 'suggest', 'default' => array( 'searchWholePhrase' => TRUE ) ), ), ] ], ```<issue_comment>username_1: You should give MM\_match\_fields a try <https://docs.typo3.org/typo3cms/TCAReference/ColumnsConfig/Type/Select.html#mm-match-fields> here is an example: <https://typo3blogger.de/tca-advanced-mm_match_fields-subquery-sorting/> Upvotes: 2 <issue_comment>username_2: The by using MM\_match\_fields working TCA for this both fields now looks like: ``` 'related' => [ 'label' => 'LLL:EXT:dreipc_myadlershof/Resources/Private/Language/locallang_db.xlf:tx_dreipcmyadlershof_domain_model_news.related', 'config' => [ 'type' => 'select', 'renderType' => 'selectMultipleSideBySide', 'foreign_table' => 'tx_news_domain_model_news', 'foreign_table_where' => 'AND tx_news_domain_model_news.uid != ###THIS_UID### AND tx_news_domain_model_news.deleted = 0 AND tx_news_domain_model_news.hidden = 0 AND tx_news_domain_model_news.is_event = 0 AND tx_news_domain_model_news.enable = 1 AND tx_news_domain_model_news.sample = 0 ORDER BY tx_news_domain_model_news.datetime DESC', 'MM' => 'tx_news_domain_model_news_related_mm', 'MM_match_fields' => [ 'fieldname' => 'related', ], 'size' => 20, 'minitems' => 0, 'maxitems' => 100, 'wizards' => [ 
'suggest' => [ 'type' => 'suggest', 'default' => [ 'searchWholePhrase' => true ] ], ], ] ], 'related_event' => [ 'label' => 'LLL:EXT:dreipc_myadlershof/Resources/Private/Language/locallang_db.xlf:tx_dreipcmyadlershof_domain_model_news.related_event', 'config' => [ 'type' => 'select', 'renderType' => 'selectMultipleSideBySide', 'foreign_table' => 'tx_news_domain_model_news', 'foreign_table_where' => 'AND tx_news_domain_model_news.uid != ###THIS_UID### AND tx_news_domain_model_news.deleted = 0 AND tx_news_domain_model_news.hidden = 0 AND tx_news_domain_model_news.is_event = 1 AND tx_news_domain_model_news.enable = 1 AND tx_news_domain_model_news.sample = 0 ORDER BY tx_news_domain_model_news.datetime DESC', 'MM' => 'tx_news_domain_model_news_related_mm', 'MM_match_fields' => [ 'fieldname' => 'related_event', ], 'size' => 20, 'minitems' => 0, 'maxitems' => 100, 'wizards' => [ 'suggest' => [ 'type' => 'suggest', 'default' => [ 'searchWholePhrase' => true, 'addWhere' => ' AND tx_news_domain_model_news.uid != ###THIS_UID### AND tx_news_domain_model_news.is_event = 1' ] ] ], ] ], ``` For that we also needed to add the field **fieldname** to the table **tx\_news\_domain\_model\_news\_related\_mm** Upvotes: 3 [selected_answer]
2018/03/18
1,102
3,734
<issue_start>username_0: I have a list of generic objects with some attributes and one of them it is a Set (treeSet) of Integers. ``` class GenericObject{ Set orderNumbers = new TreeSet<>(); Other attributes... } ``` I want to order this list of generic objects comparing the Set... example: My actual List list; GenericObject with Set --> 2,3,4 GenericObject with Set --> 1,2 GenericObject with Set --> 4,5 GenericObject with Set --> 2,3 I want order the list of object like that: 1) 1,2 2) 2,3 3) 2,3,4 4) 4,5 I am trying to implements the Comparator of GenericObject and Overriding the method compare but I have not got it. Thanks, I'll be waiting for your answers<issue_comment>username_1: You should give MM\_match\_fields a try <https://docs.typo3.org/typo3cms/TCAReference/ColumnsConfig/Type/Select.html#mm-match-fields> here is an example: <https://typo3blogger.de/tca-advanced-mm_match_fields-subquery-sorting/> Upvotes: 2 <issue_comment>username_2: The by using MM\_match\_fields working TCA for this both fields now looks like: ``` 'related' => [ 'label' => 'LLL:EXT:dreipc_myadlershof/Resources/Private/Language/locallang_db.xlf:tx_dreipcmyadlershof_domain_model_news.related', 'config' => [ 'type' => 'select', 'renderType' => 'selectMultipleSideBySide', 'foreign_table' => 'tx_news_domain_model_news', 'foreign_table_where' => 'AND tx_news_domain_model_news.uid != ###THIS_UID### AND tx_news_domain_model_news.deleted = 0 AND tx_news_domain_model_news.hidden = 0 AND tx_news_domain_model_news.is_event = 0 AND tx_news_domain_model_news.enable = 1 AND tx_news_domain_model_news.sample = 0 ORDER BY tx_news_domain_model_news.datetime DESC', 'MM' => 'tx_news_domain_model_news_related_mm', 'MM_match_fields' => [ 'fieldname' => 'related', ], 'size' => 20, 'minitems' => 0, 'maxitems' => 100, 'wizards' => [ 'suggest' => [ 'type' => 'suggest', 'default' => [ 'searchWholePhrase' => true ] ], ], ] ], 'related_event' => [ 'label' => 'LLL:EXT:dreipc_myadlershof/Resources/Private/Language/locallang_db.xlf:tx_dreipcmyadlershof_domain_model_news.related_event', 'config' => [ 'type' => 'select', 'renderType' => 'selectMultipleSideBySide', 'foreign_table' => 'tx_news_domain_model_news', 'foreign_table_where' => 'AND tx_news_domain_model_news.uid != ###THIS_UID### AND tx_news_domain_model_news.deleted = 0 AND tx_news_domain_model_news.hidden = 0 AND tx_news_domain_model_news.is_event = 1 AND tx_news_domain_model_news.enable = 1 AND tx_news_domain_model_news.sample = 0 ORDER BY tx_news_domain_model_news.datetime DESC', 'MM' => 'tx_news_domain_model_news_related_mm', 'MM_match_fields' => [ 'fieldname' => 'related_event', ], 'size' => 20, 'minitems' => 0, 'maxitems' => 100, 'wizards' => [ 'suggest' => [ 'type' => 'suggest', 'default' => [ 'searchWholePhrase' => true, 'addWhere' => ' AND tx_news_domain_model_news.uid != ###THIS_UID### AND tx_news_domain_model_news.is_event = 1' ] ] ], ] ], ``` For that we also needed to add the field **fieldname** to the table **tx\_news\_domain\_model\_news\_related\_mm** Upvotes: 3 [selected_answer]
2018/03/18
1,055
3,227
<issue_start>username_0: Im trying to assign values from stdin (that I get using `char *gets(char *str)`) in the `while` loop but it doesn't seem to work. I have seven `automobile` struct` and I want that at every iteration the variable I'm triyng to fill with the input changes form a1.marca to a2.marca. I tried this strategy but it doesn't seem to work. ``` #include #include struct automobile { char marca[15]; char modello[20]; char targa[7]; unsigned cilindrata; } a1, a2, a3, a4, a5, a6, a7; int main() { int Nauto=7, a=0; char const id[3]={'a', '1', '\0'}; a=1; while (Nauto>0) { printf ("Inserisci i dati della %d%c auto\n", a, 167); printf ("Marca: "); gets(id.marca); printf ("Modello: "); gets(id.modello); printf ("Targa: "); gets(id.targa); printf ("Cilindrata: "); scanf ("%d", &id.cilindrata); a++; Nauto--; id[2]=a; } return 0; } ```<issue_comment>username_1: You should have an array of automobiles and then in the while loop you can index the array. A for loop will be easier than a while loop: ``` struct automobile { char marca[15]; char modello[20]; char targa[7]; unsigned cilindrata; } a[6]; for (int i=0; i<6; i++) { fgets(a[i].marca, sizeof(a[i].marca), stdin); // .... } ``` Note the use of `fgets`, which is more safe than `gets` as you can specify the number of characters to read. The safest way to specifcy the number of characters to read is by using `sizeof`. Here it takes `sizeof(a[i].marca)` charaters. The compiler will replace this with the size in compile-time, even though `a[i]` looks like runtime. This is safest because if you later decide to change the size of `marca`, the size to read is changed here automatically. The specification of `fgets` says it reads up to `n-1` charactes so there will be room for the terminating null character of the string. For the meaning of the last parameter I refer you to the documentation of `fgets`. Read it carefully to understand its behaviour. Upvotes: 3 [selected_answer]<issue_comment>username_2: I got what you were trying to do. In C, it's not practical to allocate static structs and using it like that. Instead, you can allocate it dynamically. Here's my running code example below (I tried to maintain your code ): ``` #include #include #define maxNauto 7 typedef struct { char marca[15]; char modello[20]; char targa[7]; unsigned cilindrata; } automobile; int main() { int Nauto=0, a=0; a=1; automobile\*\* autos = (automobile\*\*) malloc(maxNauto \* sizeof(automobile\*)); // Allocates an array of automobiles to // Support each index of an automobile struct while (Nauto < maxNauto) { autos[Nauto] = (automobile\*) malloc(sizeof(automobile)); // Allocate each automobile struct printf ("Inserisci i dati della %d%c auto\n", a, 167); printf ("Marca: "); getc(stdin); // Gets the '\n' char from the stdin, so it's not detected as an input in fgets or scanf fgets(autos[Nauto]->marca, 15, stdin); printf ("Modello: "); fgets(autos[Nauto]->modello, 20, stdin); printf ("Targa: "); fgets(autos[Nauto]->targa, 7, stdin); printf ("Cilindrata: "); scanf ("%d", &(autos[Nauto]->cilindrata)); a++; Nauto++; } return 0; } ``` Upvotes: -1
2018/03/18
797
2,510
<issue_start>username_0: I upload 12 image to imgur every half hour. But meet this issue. > > Imgur ERROR message: {'exception': [], 'code': 429, 'type': > 'ImgurException', 'message': 'You are uploading too fast. Please wait > -0 more minutes.'} > > > I don't understand why is -0 minutes. I guess I reach my rate limit, so try to see my rate limit. But it look normal. ``` { "data": { "UserLimit": 2000, "UserRemaining": 2000, "UserReset": 1521378886, "ClientLimit": 12500, "ClientRemaining": 12500 }, "success": true, "status": 200 } ``` I try in different machine in same CLIENTID, the upload function work fine. Did my IP be banned by imgur? When will they free my IP? UPDATE: I find out each IP imgur will treat different user. But my machine curl credits api. The response look fine. ``` curl --request GET \ --url 'https://api.imgur.com/3/credits' \ --header 'Authorization: Client-ID xxx' {"data":{"UserLimit":500,"UserRemaining":497,"UserReset":1521394842,"ClientLimit":12500,"ClientRemaining":12457},"success":true,"status":200}⏎ ``` When I run upload image. It still will show > > Imgur ERROR message: {'message': 'You are uploading too fast. Please > wait -0 more minutes.', 'type': 'ImgurException', 'code': 429, > 'exception': []} > > > However, other image function work fine such as get image.<issue_comment>username_1: Apparently "[there is an upload limit of 50 images per IP address per hour](https://help.imgur.com/hc/en-us/articles/115000083326-What-files-can-I-upload-What-is-the-size-limit-)". I don't know why they don't state this on the main [api.imgur.com](https://api.imgur.com/#limits) page, it's essential information for an API... Upvotes: 5 [selected_answer]<issue_comment>username_2: I had the same problem, but, it was not related to my IP. I checked the file extension and it was a `.heic` file extension - which is not valid for Imgur. After checking the [source](https://www.reddit.com/r/shortcuts/comments/9lmsqw/upload_to_imgur_file_type_invalid_1_error/), I saved the image to a valid file extension - in my case, to `.png`. By changing the file extension to a valid one, I was able to upload the image. Source: [Upload to Imgur "File type invalid (1)" error](https://www.reddit.com/r/shortcuts/comments/9lmsqw/upload_to_imgur_file_type_invalid_1_error/) on Reddit. Upvotes: 0
2018/03/18
1,818
3,917
<issue_start>username_0: I have the following X and y matrices: [![enter image description here](https://i.stack.imgur.com/8sKrW.png)](https://i.stack.imgur.com/8sKrW.png) for which I want to calculate the best value for theta for a linear regression equation using the normal equation approach with: theta = inv(X^T \* X) \* X^T \* y the results for theta should be : [188.400,0.3866,-56.128,-92.967,-3.737] I implement the steps with: ``` X=np.matrix([[1,1,1,1],[2104,1416,1534,852],[5,3,3,2],[1,2,2,1],[45,41,30,36]]) y=np.matrix([460,232,315,178]) XT=np.transpose(X) XTX=XT.dot(X) inv=np.linalg.inv(XTX) inv_XT=inv.dot(XT) theta=inv_XT.dot(y) print(theta) ``` But I dont't get the desired results. Instead it throws an error with: > > Traceback (most recent call last): File "C:/", line 19, in > theta=inv\_XT.dot(y) ValueError: shapes (4,5) and (1,4) not aligned: 5 (dim 1) != 1 (dim 0) > > > What am I doing wrong?<issue_comment>username_1: I think you have messed up dimensions a little bit. Your `X` is actually a `XT` and `XT` is `X`. Try this: ``` In [163]: X=np.matrix([[1,1,1,1],[2104,1416,1534,852],[5,3,3,2],[1,2,2,1],[45,41,30,36]]).T In [164]: y=np.matrix([460,232,315,178]) In [165]: X Out[165]: matrix([[ 1, 2104, 5, 1, 45], [ 1, 1416, 3, 2, 41], [ 1, 1534, 3, 2, 30], [ 1, 852, 2, 1, 36]]) In [166]: XT = X.T In [167]: np.linalg.inv(XT @ X) @ XT @ y.T Out[167]: matrix([[243.4453125 ], [ -0.47787476], [268.609375 ], [ 3.1328125 ], [ -5.83056641]]) ``` **UPDATE:** this approach gives values that are closer to your desired values: ``` In [197]: (np.linalg.inv(X @ X.T) @ X).T @ y.T Out[197]: matrix([[182.27200269], [ 0.34497234], [-38.43393186], [-82.90625955], [ -3.84484213]]) ``` **UPDATE2:** how to create a correct matrix initially: ``` In [217]: np.array([[1, 2104, 5, 1, 45], ...: [1, 1416, 3, 2, 41], ...: [1, 1534, 3, 2, 30], ...: [1, 852, 2, 1, 36]]) ...: Out[217]: array([[ 1, 2104, 5, 1, 45], [ 1, 1416, 3, 2, 41], [ 1, 1534, 3, 2, 30], [ 1, 852, 2, 1, 36]]) ``` Upvotes: 2 <issue_comment>username_2: I have solved the problem by using [numpy.linalg.pinv()](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.pinv.html) that is the "pseudo-inverse" instead of numpy.linalg.inv() for the inverting of the matrix since the ducumentation sais: > > "The pseudo-inverse of a matrix A, denoted A^+, is defined as: “the > matrix that ‘solves’ [the least-squares problem] Ax = b,” i.e., if > \bar{x} is said solution, then A^+ is that matrix such that \bar{x} = > A^+b." > > > and solving the *least-squares problem* is exactly what I want to achieve in the context of linear regression. Consequently the code is: ``` X=np.matrix([[1,2104,5,1,45],[1,1416,3,2,40],[1,1534,3,2,30],[1,852,2,1,36]]) y=np.matrix([[460],[232],[315],[178]]) XT=X.T XTX=XT@X inv=np.linalg.pinv(XTX) theta=(inv@XT)@y print(theta) [[188.40031946] [ 0.3866255 ] [-56.13824955] [-92.9672536 ] [ -3.73781915]] ``` **Edit:** There is also the possibility of regularization to get rid of the problem of NoN-invertibility by changing the normal-equation to: theta = (XT@X + **lambda\*matrix**)^(-1)@XT@y where **lambda** is a real number and called *regularization parameter* and **matrix** is a (n+1 x n+1) dimensional matrix of the shape: ``` 0 0 0 0 ... 0 0 0 1 0 0 ... 0 0 0 0 1 0 ... 0 0 0 0 0 1 ... 0 0 . . . 
0 0 0 0 0 0 0 1 ``` That is an [eye()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.eye.html) matrix with the element [0,0] set to 0. More about the concept of regularization can be read [here](http://www.dmi.unict.it/farinella/SMM/Lectures/09Dic2015_2.pdf). Upvotes: 1 [selected_answer]
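To make the regularized normal equation above concrete, here is a small NumPy sketch that reuses the 4x5 design matrix from the accepted answer; `lam` is an arbitrary illustrative value, and the `[0,0]` entry of the regularization matrix is zeroed out exactly as described:

```python
import numpy as np

X = np.array([[1, 2104, 5, 1, 45],
              [1, 1416, 3, 2, 40],
              [1, 1534, 3, 2, 30],
              [1,  852, 2, 1, 36]], dtype=float)
y = np.array([[460], [232], [315], [178]], dtype=float)

lam = 1.0                # illustrative regularization strength, not a tuned value
L = np.eye(X.shape[1])   # (n+1) x (n+1) identity ...
L[0, 0] = 0              # ... with the bias term left unpenalized

theta = np.linalg.inv(X.T @ X + lam * L) @ X.T @ y
print(theta.ravel())
```

With only 4 samples and 5 features, X^T X on its own is singular; the lam * L term is what makes the matrix invertible here, which is the non-invertibility problem the regularization is meant to address.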
2018/03/18
6,730
16,564
<issue_start>username_0: This seems a stupid question, and I have been looking around in the forum for a similar question, but I still don't understand. I have to select the `NAME` of the employee with the minimum wage for his work ![enter image description here](https://i.stack.imgur.com/l5ub5.jpg) Below is my table named `DIPENDENTI`: ``` "MATRICOLA" (ID) "NOME_IMP" (NAME) "MANSIONE" (WORK) "SUP" "DATA_ASS" "STIPENDIO" (WAGE) "COMMISS" "NUM_DEPART" ```<issue_comment>username_1: This should work: ``` select [nome_imp] from dipendenti where STIPENDIO in ( select min(STIPENDIO) from dipendenti ) ``` If you want to split it up between each type of "work" then try this: ``` select nome_imp,b.mansione,b.wage from dipendenti a inner join (select min(STIPENDIO) as wage,mansione from dipendenti group by mansione ) b on b.mansione = a.mansione and b.wage = a.STIPENDIO ``` Upvotes: 0 <issue_comment>username_2: ``` drop TABLE IF EXISTS DIPENDENTI; drop TABLE IF EXISTS DIPART; CREATE TABLE DIPART ( NUM_DIPART SMALLINT primary key, NOME_DIPART CHAR(14), CITTA CHAR(13)) ENGINE = InnoDB; CREATE TABLE DIPENDENTI ( MATRICOLA SMALLINT primary key, NOME_IMP CHAR(10), MANSIONE CHAR(11), SUP SMALLINT, DATA_ASS DATE, STIPENDIO INTEGER, COMMISS INTEGER, NUM_DIPART SMALLINT, FOREIGN KEY (NUM_DIPART) REFERENCES DIPART(NUM_DIPART) ) ENGINE = InnoDB; -- -- inserimento delle tuple nella tabella DIPART -- INSERT INTO DIPART VALUES (15,'SICUREZZA','FIRENZE'); INSERT INTO DIPART VALUES (12,'AMBIENTE','MANTOVA'); INSERT INTO DIPART VALUES (10,'DIREZIONE','ROMA'); INSERT INTO DIPART VALUES (20,'RICERCHE','FIRENZE'); INSERT INTO DIPART VALUES (30,'VENDITE','MILANO'); INSERT INTO DIPART VALUES (40,'CONTABILITA''','TORINO'); INSERT INTO DIPART VALUES (11,'PIANIFICAZIONE','ROMA'); INSERT INTO DIPART VALUES (50,'CONTROLLO','TORINO'); -- -- inserimento tuple nella tabella DIPENDENTI -- INSERT INTO DIPENDENTI VALUES (7369,'ORTU','IMPIEGATO',7902,'1980-12-17',800000,NULL,20); INSERT INTO DIPENDENTI VALUES (7499,'VILLANI','VENDITORE',NULL,'1981-02-20',1600000,300000,30); INSERT INTO DIPENDENTI VALUES (7521,'VILLA','VENDITORE',7698,'1981-02-12',1250000,500000,30); INSERT INTO DIPENDENTI VALUES (7566,'FIRMANI','DIRIGENTE',7839,'1981-12-31',2975000,NULL,20); INSERT INTO DIPENDENTI VALUES (7654,'MAGRINI','VENDITORE',7698,'2001-09-30',1250000,1400000,30); INSERT INTO DIPENDENTI VALUES (7698,'ROSSI','DIRIGENTE',NULL,'1981-05-15',2850000,NULL,30); INSERT INTO DIPENDENTI VALUES (7782,'NOVELLI','DIRIGENTE',7839,'2006-03-24',2450000,NULL,10); INSERT INTO DIPENDENTI VALUES (7788,'GAGGI','ANALISTA',7566,'1999-12-23',3000000,NULL,20); INSERT INTO DIPENDENTI VALUES (7839,'GIGLIO','PRESIDENTE',NULL,'2001-12-12',5000000,NULL,10); INSERT INTO DIPENDENTI VALUES (7844,'ADRIANI','VENDITORE',7698,'1981-05-24',1500000,0,30); INSERT INTO DIPENDENTI VALUES (7902,'ORLANDI','ANALISTA',7566,'1982-8-30',900000,NULL,20); INSERT INTO DIPENDENTI VALUES (7934,'ZUCCHI','IMPIEGATO',7782,'1980-11-12',1300000,NULL,10); INSERT INTO DIPENDENTI VALUES (7370,'ROSSI','FUNZIONARIO',7470,'2003-01-23',850000,NULL,11); INSERT INTO DIPENDENTI VALUES (7470,'VERDI','FUNZIONARIO',7654,'2000-10-10',1650000,300000,15); INSERT INTO DIPENDENTI VALUES (7471,'VILLA','VENDITORE',7788,'1993-03-21',1250000,500000,11); INSERT INTO DIPENDENTI VALUES (7570,'VILLARI','DIRIGENTE',7370,'1981-11-12',2975000,NULL,20); INSERT INTO DIPENDENTI VALUES (7670,'NEGRINI','FUNZIONARIO',NULL,'1981-12-31',1250000,1400000,50); INSERT INTO DIPENDENTI VALUES 
(7675,'ROSSI','DIRIGENTE',NULL,'1989-01-25',2850000,NULL,15); INSERT INTO DIPENDENTI VALUES (7770,'NERINI','DIRIGENTE',NULL,'2002-05-21',2450000,NULL,50); INSERT INTO DIPENDENTI VALUES (7790,'GAGGIULO','DIRIGENTE',7566,'2005-04-14',3000000,NULL,11); INSERT INTO DIPENDENTI VALUES (7800,'GIGLIOLI','PRESIDENTE',NULL,'2001-01-12',5000000,NULL,11); INSERT INTO DIPENDENTI VALUES (7870,'ADRIANU','DIRIGENTE',7698,'2005-08-31',1500000,0,50); INSERT INTO DIPENDENTI VALUES (7971,'ORLANDINI','ANALISTA',7566,'2006-02-27',900000,NULL,20); INSERT INTO DIPENDENTI VALUES (7970,'ETTI','IMPIEGATO',7782,'2003-10-11',1300000,NULL,10); INSERT INTO DIPENDENTI VALUES (7972,'MILANESI','ANALISTA',NULL,'2010-10-11',1350000,NULL,NULL); -- -- Aggiungo ora il vincolo di integritàreferenziale -- ALTER TABLE DIPENDENTI add FOREIGN KEY (SUP) REFERENCES DIPENDENTI (MATRICOLA); -- -- creazione delle tabelle dei FORNITORI, PRODOTTI,PARTI, FORNITURE, SPEDIZIONI -- e SPED_DETTAGLI -- le tabelle hanno opportuni vincoli di integritàreferenziali -- drop TABLE IF EXISTS SPED_DETTAGLI; drop TABLE IF EXISTS SPEDIZIONI; drop TABLE IF EXISTS FORNITURE; DROP TABLE IF EXISTS FORNITORI; DROP TABLE IF EXISTS PRODOTTI; DROP TABLE IF EXISTS PARTI; CREATE TABLE FORNITORI ( COD CHAR(4) PRIMARY KEY, NOME CHAR(13), CITTA CHAR(13)); CREATE TABLE PARTI ( COD CHAR(4) PRIMARY KEY, NOME CHAR(20), COLORE CHAR(20), PESO INTEGER, CITTA CHAR(20)); CREATE TABLE PRODOTTI ( COD CHAR(4) PRIMARY KEY, NOME CHAR(13), CITTA CHAR(10)); CREATE TABLE FORNITURE (FCOD CHAR(4), PCOD CHAR(4), PRCOD CHAR(4), QUANTITA INTEGER, PRIMARY KEY (FCOD,PCOD,PRCOD), FOREIGN KEY (FCOD) REFERENCES FORNITORI(COD), FOREIGN KEY (PCOD) REFERENCES PARTI(COD), FOREIGN KEY (PRCOD) REFERENCES PRODOTTI(COD)); CREATE TABLE SPEDIZIONI (SPCOD CHAR(4) PRIMARY KEY, DATASP DATE, CITTADEST CHAR(20), CITTAPART CHAR(20), QTA_TOTALE SMALLINT); CREATE TABLE SPED_DETTAGLI (SPCOD CHAR(4) , FCOD CHAR(4) , PCOD CHAR(4) , PRCOD CHAR(4) , PRIMARY KEY (SPCOD,FCOD,PCOD,PRCOD), FOREIGN KEY (SPCOD) REFERENCES SPEDIZIONI(SPCOD), FOREIGN KEY (FCOD,PCOD,PRCOD) REFERENCES FORNITURE(FCOD,PCOD,PRCOD)); -- -- inserimento tuple in PARTI -- INSERT INTO PARTI VALUES ('P034', 'RUOTA1', 'NERO', 120000, 'PARMA'); INSERT INTO PARTI VALUES ('P002','BULLONE','ROSSO', NULL, 'MILANO'); INSERT INTO PARTI VALUES ('P005','MICROFONO','BIANCO' , 340, 'GENOVA'); INSERT INTO PARTI VALUES ('P006','AURICOLARE','NERO' , 120, 'MILANO'); INSERT INTO PARTI VALUES ('P009','CORPO_TASTIERA','BEIGE' , 400, 'MILANO'); INSERT INTO PARTI VALUES ('P033','RUOTA2','ROSSO' , 140000, NULL); INSERT INTO PARTI VALUES ('P001','VITE','ROSSO' , 10, 'ROMA'); INSERT INTO PARTI VALUES ('P004','CORNETTA',NULL, 900, 'ROMA'); INSERT INTO PARTI VALUES ('P007','TASTO','NERO' , 120, 'PAVIA'); INSERT INTO PARTI VALUES ('P008','TASTO','BIANCO' , 120, 'PAVIA'); INSERT INTO PARTI VALUES ('P010','APPOGGIO_TAST','BEIGE' , 920000, NULL); INSERT INTO PARTI VALUES ('P003','PENNINO','ROSSO' , 100, 'TORINO'); INSERT INTO PARTI VALUES ('P011','ADESIVO','METALLO' , 9, 'TORINO'); -- -- inserimento tuple in PRODOTTI -- INSERT INTO PRODOTTI VALUES ('PR01','TASTIERA_IBM','TORINO'); INSERT INTO PRODOTTI VALUES ('PR02','TASTIERA_DEC',NULL); INSERT INTO PRODOTTI VALUES ('PR03','SCHEDA_PC','MILANO'); INSERT INTO PRODOTTI VALUES ('PR04','SCHEDA_COMP','ROMA'); INSERT INTO PRODOTTI VALUES ('PR05','XT_IBM','ROMA'); INSERT INTO PRODOTTI VALUES ('PR06','M24','TORINO'); INSERT INTO PRODOTTI VALUES ('PR07','AT_IBM','PAVIA'); INSERT INTO PRODOTTI VALUES ('PR08','MAC','TORINO'); INSERT INTO 
PRODOTTI VALUES ('PR11','TASTIERA_WIRE','TORINO'); INSERT INTO PRODOTTI VALUES ('PR12','TASTIERA_FLES','MILANO'); INSERT INTO PRODOTTI VALUES ('PR13','SCHEDA_AUDIO','MILANO'); INSERT INTO PRODOTTI VALUES ('PR14','SCHEDA_VIDEO','PAVIA'); INSERT INTO PRODOTTI VALUES ('PR15','VAIO','LATINA'); INSERT INTO PRODOTTI VALUES ('PR26','DELL','VENEZIA'); INSERT INTO PRODOTTI VALUES ('PR37','CANON','PAVIA'); INSERT INTO PRODOTTI VALUES ('PR58','NOKIA_SET','FIRENZE'); -- -- inserimento tuple in FORNITORI -- INSERT INTO FORNITORI VALUES ('F001','ROSSI','MILANO'); INSERT INTO FORNITORI VALUES ('F002','NERI','ROMA'); INSERT INTO FORNITORI VALUES ('F003','BIANCHI','MILANO'); INSERT INTO FORNITORI VALUES ('F004','DONATI','ROMA'); INSERT INTO FORNITORI VALUES ('F015','MARIANO','VENEZIA'); INSERT INTO FORNITORI VALUES ('F116','GILARDI','VENEZIA'); INSERT INTO FORNITORI VALUES ('F217','VERDI','PARMA'); INSERT INTO FORNITORI VALUES ('F328','PUCCINI','LUCCA'); INSERT INTO FORNITORI VALUES ('F339','CUGINI',NULL); INSERT INTO FORNITORI VALUES ('F110','LUCINI','TORINO'); INSERT INTO FORNITORI VALUES ('F211','BIANCHI','TORINO'); INSERT INTO FORNITORI VALUES ('F130','BIZET','PAVIA'); INSERT INTO FORNITORI VALUES ('F313','MOSCONI','TORINO'); INSERT INTO FORNITORI VALUES ('F314','ANDREI',NULL); INSERT INTO FORNITORI VALUES ('F315','MONTELATICI','FIRENZE'); INSERT INTO FORNITORI VALUES ('F316','OTTOZ','AOSTA'); INSERT INTO FORNITORI VALUES ('F317','FRENI','MODENA'); INSERT INTO FORNITORI VALUES ('F218','VILLA','ROMA'); INSERT INTO FORNITORI VALUES ('F230','MOSCONI','ROMA'); INSERT INTO FORNITORI VALUES ('F332','ILLO','ROMA'); -- -- inserimento tuple FORNITURE -- INSERT INTO FORNITURE VALUES ('F003','P004','PR05', 100); INSERT INTO FORNITURE VALUES ('F001','P001','PR01', 100); INSERT INTO FORNITURE VALUES ('F001','P001','PR02', 50); INSERT INTO FORNITURE VALUES ('F218','P001','PR08', 200); INSERT INTO FORNITURE VALUES ('F230','P001','PR01', 3000); INSERT INTO FORNITURE VALUES ('F315','P001','PR01', 104); INSERT INTO FORNITURE VALUES ('F003','P006','PR07', 1980); INSERT INTO FORNITURE VALUES ('F003','P001','PR08', 9000); INSERT INTO FORNITURE VALUES ('F003','P001','PR01', 4000); INSERT INTO FORNITURE VALUES ('F313','P002','PR04', 204); INSERT INTO FORNITURE VALUES ('F313','P002','PR05', 124); INSERT INTO FORNITURE VALUES ('F130','P005','PR06', 440); INSERT INTO FORNITURE VALUES ('F317','P002','PR07', 6003); INSERT INTO FORNITURE VALUES ('F003','P002','PR08', 1400); INSERT INTO FORNITURE VALUES ('F116','P001','PR04', 1030); INSERT INTO FORNITURE VALUES ('F004','P001','PR06', 1003); INSERT INTO FORNITURE VALUES ('F004','P001','PR01', 1020); INSERT INTO FORNITURE VALUES ('F116','P001','PR01', 1008); INSERT INTO FORNITURE VALUES ('F230','P002','PR01', 344); INSERT INTO FORNITURE VALUES ('F230','P003','PR02', 2008); INSERT INTO FORNITURE VALUES ('F314','P001','PR03', 9008); INSERT INTO FORNITURE VALUES ('F314','P003','PR07', 1008); INSERT INTO FORNITURE VALUES ('F315','P006','PR01', 10000); INSERT INTO FORNITURE VALUES ('F316','P003','PR02', 90020); INSERT INTO FORNITURE VALUES ('F217','P004','PR03', 10); INSERT INTO FORNITURE VALUES ('F317','P007','PR04', 456899); INSERT INTO FORNITURE VALUES ('F116','P002','PR04', 1030); INSERT INTO FORNITURE VALUES ('F116','P001','PR03', 1003); INSERT INTO FORNITURE VALUES ('F116','P002','PR02', 1020); INSERT INTO FORNITURE VALUES ('F116','P007','PR01', 1008); INSERT INTO FORNITURE VALUES ('F315','P003','PR03', 344); INSERT INTO FORNITURE VALUES ('F230','P033','PR02', 2008); INSERT INTO 
FORNITURE VALUES ('F314','P033','PR03', 9008); INSERT INTO FORNITURE VALUES ('F314','P033','PR07', 1008); INSERT INTO FORNITURE VALUES ('F315','P033','PR01', 10000); INSERT INTO FORNITURE VALUES ('F001','P033','PR58', 90020); INSERT INTO FORNITURE VALUES ('F217','P001','PR58', 10); INSERT INTO FORNITURE VALUES ('F317','P008','PR58', 456899); -- -- inserimento tuple SPEDIZIONI -- INSERT INTO SPEDIZIONI VALUES('SP01','2005-10-01','MILANO','MILANO',200); INSERT INTO SPEDIZIONI VALUES('SP02','2005-11-01','MILANO','TORINO',150); INSERT INTO SPEDIZIONI VALUES('SP03','2005-12-01','TORINO','MILANO',300); INSERT INTO SPEDIZIONI VALUES('SP04','2005-10-02','VENEZIA','BOLOGNA',10); INSERT INTO SPEDIZIONI VALUES('SP05','2006-10-01','BOLOGNA','BOLOGNA',330); INSERT INTO SPEDIZIONI VALUES('SP06','2006-10-01','VENEZIA','ROMA',50); INSERT INTO SPEDIZIONI VALUES('SP07','2007-01-01','ROMA','NAPOLI',NULL); INSERT INTO SPEDIZIONI VALUES('SP08','2007-02-17','ROMA','VENEZIA',1000); INSERT INTO SPEDIZIONI VALUES('SP09','2007-03-11','NAPOLI','NAPOLI',2000); -- -- inserimento tuple SPED_DETTAGLI -- INSERT INTO SPED_DETTAGLI VALUES ('SP01','F003','P004','PR05'); INSERT INTO SPED_DETTAGLI VALUES ('SP01','F001','P001','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP01','F001','P001','PR02'); INSERT INTO SPED_DETTAGLI VALUES ('SP01','F218','P001','PR08'); INSERT INTO SPED_DETTAGLI VALUES ('SP01','F230','P001','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP02','F315','P001','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP02','F003','P006','PR07'); INSERT INTO SPED_DETTAGLI VALUES ('SP03','F003','P001','PR08'); INSERT INTO SPED_DETTAGLI VALUES ('SP03','F003','P001','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F313','P002','PR04'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F313','P002','PR05'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F130','P005','PR06'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F317','P002','PR07'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F003','P002','PR08'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F116','P001','PR04'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F004','P001','PR06'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F004','P001','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F116','P001','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F230','P002','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F230','P003','PR02'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F314','P001','PR03'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F314','P003','PR07'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F315','P006','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F316','P003','PR02'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F217','P004','PR03'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F317','P007','PR04'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F116','P002','PR04'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F116','P001','PR03'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F116','P002','PR02'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F116','P001','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F315','P003','PR03'); INSERT INTO SPED_DETTAGLI VALUES ('SP06','F230','P033','PR02'); INSERT INTO SPED_DETTAGLI VALUES ('SP07','F314','P033','PR03'); INSERT INTO SPED_DETTAGLI VALUES ('SP07','F314','P033','PR07'); INSERT INTO SPED_DETTAGLI VALUES ('SP07','F315','P033','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP07','F001','P033','PR58'); INSERT INTO SPED_DETTAGLI VALUES ('SP08','F217','P001','PR58'); INSERT INTO SPED_DETTAGLI VALUES ('SP08','F317','P008','PR58'); ``` Upvotes: 0 
<issue_comment>username_3: From your sample data, you can try this SQL statement. Get `MIN` wage and MATRICOLA(PK) on subquery and `join` self you will get the minimum wage for his work. ``` SELECT a.* FROM DIPENDENTI a INNER JOIN ( SELECT MIN(STIPENDIO),MATRICOLA FROM DIPENDENTI ) b ON b.MATRICOLA = a.MATRICOLA; ``` [SQLFiddle](http://sqlfiddle.com/#!9/a16b21/19) Upvotes: 1 <issue_comment>username_2: Thank to everyone, i managed to solve the problem, here it is ``` SELECT nome_imp, mansione, stipendio FROM dipendenti WHERE DIPENDENTI.stipendio = (SELECT MIN(stipendio) FROM dipendenti d2 WHERE DIPENDENTI.mansione = d2.mansione) ``` Upvotes: 0
2018/03/18
8,665
24,508
<issue_start>username_0: [enter image description here](https://i.stack.imgur.com/Vlloa.png)In my application I have allowed my users to save their details into the firebase database I was then trying to allow the users to view their details in a ListView. ``` package com.example.aaron.inthehole; import android.content.Intent; import android.os.Bundle; import android.provider.ContactsContract; import android.support.annotation.NonNull; import android.support.annotation.Nullable; import android.support.v7.app.AppCompatActivity; import android.util.Log; import android.view.View; import android.widget.ArrayAdapter; import android.widget.Button; import android.widget.ListView; import android.widget.Toast; import com.google.firebase.auth.FirebaseAuth; import com.google.firebase.auth.FirebaseUser; import com.google.firebase.database.DataSnapshot; import com.google.firebase.database.DatabaseError; import com.google.firebase.database.DatabaseReference; import com.google.firebase.database.FirebaseDatabase; import com.google.firebase.database.ValueEventListener; import java.util.ArrayList; public class view_database extends AppCompatActivity { private static final String TAG = "ViewDatabase"; //add Firebase Database stuff private FirebaseDatabase mFirebaseDatabase; private FirebaseAuth mAuth; private FirebaseAuth.AuthStateListener mAuthListener; private DatabaseReference myRef; private String userID; private ListView mListView; @Override protected void onCreate(@Nullable Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_view_database); mListView = (ListView) findViewById(R.id.listview); //declare the database reference object. This is what we use to access the database. //NOTE: Unless you are signed in, this will not be useable. mAuth = FirebaseAuth.getInstance(); mFirebaseDatabase = FirebaseDatabase.getInstance(); myRef = mFirebaseDatabase.getReference(); FirebaseUser user = mAuth.getCurrentUser(); assert user != null; userID = user.getUid(); mAuthListener = new FirebaseAuth.AuthStateListener() { @Override public void onAuthStateChanged(@NonNull FirebaseAuth firebaseAuth) { FirebaseUser user = firebaseAuth.getCurrentUser(); if (user != null) { // User is signed in Log.d(TAG, "onAuthStateChanged:signed_in:" + user.getUid()); } else { // User is signed out Log.d(TAG, "onAuthStateChanged:signed_out"); toastMessage("Successfully signed out."); } // ... } }; myRef.addValueEventListener(new ValueEventListener() { @Override public void onDataChange(DataSnapshot dataSnapshot) { // This method is called once with the initial value and again // whenever data at this location is updated. 
showData(dataSnapshot); } @Override public void onCancelled(DatabaseError databaseError) { } }); } /*private void showData(DataSnapshot dataSnapshot) { for(DataSnapshot ds : dataSnapshot.getChildren()){ UserInformation uInfo = new UserInformation(); uInfo.setName(ds.child(userID).getValue(UserInformation.class).getName()); //set the name uInfo.setHandicap(ds.child(userID).getValue(UserInformation.class).getHandicap()); //set the name uInfo.setAge(ds.child(userID).getValue(UserInformation.class).getAge()); //set the email uInfo.setGender(ds.child(userID).getValue(UserInformation.class).getGender()); //set the phone_num //display all the information Log.d(TAG, "showData: name: " + uInfo.getName()); Log.d(TAG, "showData: age: " + uInfo.getAge()); Log.d(TAG, "showData: handicap: " + uInfo.getHandicap()); Log.d(TAG, "showData: gender: " + uInfo.getGender()); ArrayList array = new ArrayList<>(); array.add("Full Name:"); array.add(uInfo.getName()); array.add("Age:"); array.add(uInfo.getAge()); array.add("Handicap:"); array.add(uInfo.getHandicap()); array.add("Gender:"); array.add(uInfo.getGender()); ArrayAdapter adapter = new ArrayAdapter(this,android.R.layout.simple\_list\_item\_1,array); mListView.setAdapter(adapter); } } \*/ private void showData(DataSnapshot dataSnapshot) { ArrayList array = new ArrayList<>(); for(DataSnapshot ds : dataSnapshot.getChildren()){ UserInformation uInfo = ds.getValue(UserInformation.class); array.add(uInfo.getName() + " / " + uInfo.getAge()); } ArrayAdapter adapter = new ArrayAdapter(this,android.R.layout.simple\_list\_item\_1,array); mListView.setAdapter(adapter); } @Override public void onStart() { super.onStart(); mAuth.addAuthStateListener(mAuthListener); } @Override public void onStop() { super.onStop(); if (mAuthListener != null) { mAuth.removeAuthStateListener(mAuthListener); } } /\*\* \* customizable toast \* @param message \*/ private void toastMessage(String message){ Toast.makeText(this,message,Toast.LENGTH\_SHORT).show(); } } ``` public class UserInformation { ``` private String name; private String age; private String gender; private String handicap; public UserInformation(){ } public String getAge() { return age; } public void setAge(String age) { this.age = age; } public String getName() { return name; } public void setName(String name) { this.name = name; } public String getGender() { return gender; } public void setGender(String gender) { this.gender = gender; } public String getHandicap() { return handicap; } public void setHandicap(String handicap) { this.handicap = handicap; } ``` } When I run my application and enter the view details page an error occurs and in the run a message comes up saying: ``` E/AndroidRuntime: FATAL EXCEPTION: main Process: com.example.aaron.inthehole, PID: 8958 java.lang.NullPointerException: Attempt to invoke virtual method 'java.lang.String com.example.aaron.inthehole.UserInformation.getName()' on a null object reference at com.example.aaron.inthehole.view_database.showData(view_database.java:94) at com.example.aaron.inthehole.view_database.access$100(view_database.java:31) at com.example.aaron.inthehole.view_database$2.onDataChange(view_database.java:80) at com.google.android.gms.internal.zzegf.zza(Unknown Source:13) at com.google.android.gms.internal.zzeia.zzbyc(Unknown Source:2) at com.google.android.gms.internal.zzeig.run(Unknown Source:65) at android.os.Handler.handleCallback(Handler.java:790) at android.os.Handler.dispatchMessage(Handler.java:99) at android.os.Looper.loop(Looper.java:164) at 
android.app.ActivityThread.main(ActivityThread.java:6494) at java.lang.reflect.Method.invoke(Native Method) at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:438) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:807) ``` This is the part of my applications where the details are stored ``` String name1 = name.getText().toString(); String handicap1 = handicap.getText().toString(); String age1 = age.getText().toString(); String gender1 = gender.getText().toString(); String user_id = mAuth.getCurrentUser().getUid(); DatabaseReference current_user_db = FirebaseDatabase.getInstance().getReference().child("Users").child(user_id); Map newPost = new HashMap(); newPost.put("name",name1); newPost.put("handicap",handicap1); newPost.put("age",age1); newPost.put("gender",gender1); current_user_db.setValue(newPost); Toast.makeText(ProfileActivity.this, "Details Saved", Toast.LENGTH_SHORT).show(); ```<issue_comment>username_1: This should work: ``` select [nome_imp] from dipendenti where STIPENDIO in ( select min(STIPENDIO) from dipendenti ) ``` If you want to split it up between each type of "work" then try this: ``` select nome_imp,b.mansione,b.wage from dipendenti a inner join (select min(STIPENDIO) as wage,mansione from dipendenti group by mansione ) b on b.mansione = a.mansione and b.wage = a.STIPENDIO ``` Upvotes: 0 <issue_comment>username_2: ``` drop TABLE IF EXISTS DIPENDENTI; drop TABLE IF EXISTS DIPART; CREATE TABLE DIPART ( NUM_DIPART SMALLINT primary key, NOME_DIPART CHAR(14), CITTA CHAR(13)) ENGINE = InnoDB; CREATE TABLE DIPENDENTI ( MATRICOLA SMALLINT primary key, NOME_IMP CHAR(10), MANSIONE CHAR(11), SUP SMALLINT, DATA_ASS DATE, STIPENDIO INTEGER, COMMISS INTEGER, NUM_DIPART SMALLINT, FOREIGN KEY (NUM_DIPART) REFERENCES DIPART(NUM_DIPART) ) ENGINE = InnoDB; -- -- inserimento delle tuple nella tabella DIPART -- INSERT INTO DIPART VALUES (15,'SICUREZZA','FIRENZE'); INSERT INTO DIPART VALUES (12,'AMBIENTE','MANTOVA'); INSERT INTO DIPART VALUES (10,'DIREZIONE','ROMA'); INSERT INTO DIPART VALUES (20,'RICERCHE','FIRENZE'); INSERT INTO DIPART VALUES (30,'VENDITE','MILANO'); INSERT INTO DIPART VALUES (40,'CONTABILITA''','TORINO'); INSERT INTO DIPART VALUES (11,'PIANIFICAZIONE','ROMA'); INSERT INTO DIPART VALUES (50,'CONTROLLO','TORINO'); -- -- inserimento tuple nella tabella DIPENDENTI -- INSERT INTO DIPENDENTI VALUES (7369,'ORTU','IMPIEGATO',7902,'1980-12-17',800000,NULL,20); INSERT INTO DIPENDENTI VALUES (7499,'VILLANI','VENDITORE',NULL,'1981-02-20',1600000,300000,30); INSERT INTO DIPENDENTI VALUES (7521,'VILLA','VENDITORE',7698,'1981-02-12',1250000,500000,30); INSERT INTO DIPENDENTI VALUES (7566,'FIRMANI','DIRIGENTE',7839,'1981-12-31',2975000,NULL,20); INSERT INTO DIPENDENTI VALUES (7654,'MAGRINI','VENDITORE',7698,'2001-09-30',1250000,1400000,30); INSERT INTO DIPENDENTI VALUES (7698,'ROSSI','DIRIGENTE',NULL,'1981-05-15',2850000,NULL,30); INSERT INTO DIPENDENTI VALUES (7782,'NOVELLI','DIRIGENTE',7839,'2006-03-24',2450000,NULL,10); INSERT INTO DIPENDENTI VALUES (7788,'GAGGI','ANALISTA',7566,'1999-12-23',3000000,NULL,20); INSERT INTO DIPENDENTI VALUES (7839,'GIGLIO','PRESIDENTE',NULL,'2001-12-12',5000000,NULL,10); INSERT INTO DIPENDENTI VALUES (7844,'ADRIANI','VENDITORE',7698,'1981-05-24',1500000,0,30); INSERT INTO DIPENDENTI VALUES (7902,'ORLANDI','ANALISTA',7566,'1982-8-30',900000,NULL,20); INSERT INTO DIPENDENTI VALUES (7934,'ZUCCHI','IMPIEGATO',7782,'1980-11-12',1300000,NULL,10); INSERT INTO DIPENDENTI VALUES 
(7370,'ROSSI','FUNZIONARIO',7470,'2003-01-23',850000,NULL,11); INSERT INTO DIPENDENTI VALUES (7470,'VERDI','FUNZIONARIO',7654,'2000-10-10',1650000,300000,15); INSERT INTO DIPENDENTI VALUES (7471,'VILLA','VENDITORE',7788,'1993-03-21',1250000,500000,11); INSERT INTO DIPENDENTI VALUES (7570,'VILLARI','DIRIGENTE',7370,'1981-11-12',2975000,NULL,20); INSERT INTO DIPENDENTI VALUES (7670,'NEGRINI','FUNZIONARIO',NULL,'1981-12-31',1250000,1400000,50); INSERT INTO DIPENDENTI VALUES (7675,'ROSSI','DIRIGENTE',NULL,'1989-01-25',2850000,NULL,15); INSERT INTO DIPENDENTI VALUES (7770,'NERINI','DIRIGENTE',NULL,'2002-05-21',2450000,NULL,50); INSERT INTO DIPENDENTI VALUES (7790,'GAGGIULO','DIRIGENTE',7566,'2005-04-14',3000000,NULL,11); INSERT INTO DIPENDENTI VALUES (7800,'GIGLIOLI','PRESIDENTE',NULL,'2001-01-12',5000000,NULL,11); INSERT INTO DIPENDENTI VALUES (7870,'ADRIANU','DIRIGENTE',7698,'2005-08-31',1500000,0,50); INSERT INTO DIPENDENTI VALUES (7971,'ORLANDINI','ANALISTA',7566,'2006-02-27',900000,NULL,20); INSERT INTO DIPENDENTI VALUES (7970,'ETTI','IMPIEGATO',7782,'2003-10-11',1300000,NULL,10); INSERT INTO DIPENDENTI VALUES (7972,'MILANESI','ANALISTA',NULL,'2010-10-11',1350000,NULL,NULL); -- -- Aggiungo ora il vincolo di integritàreferenziale -- ALTER TABLE DIPENDENTI add FOREIGN KEY (SUP) REFERENCES DIPENDENTI (MATRICOLA); -- -- creazione delle tabelle dei FORNITORI, PRODOTTI,PARTI, FORNITURE, SPEDIZIONI -- e SPED_DETTAGLI -- le tabelle hanno opportuni vincoli di integritàreferenziali -- drop TABLE IF EXISTS SPED_DETTAGLI; drop TABLE IF EXISTS SPEDIZIONI; drop TABLE IF EXISTS FORNITURE; DROP TABLE IF EXISTS FORNITORI; DROP TABLE IF EXISTS PRODOTTI; DROP TABLE IF EXISTS PARTI; CREATE TABLE FORNITORI ( COD CHAR(4) PRIMARY KEY, NOME CHAR(13), CITTA CHAR(13)); CREATE TABLE PARTI ( COD CHAR(4) PRIMARY KEY, NOME CHAR(20), COLORE CHAR(20), PESO INTEGER, CITTA CHAR(20)); CREATE TABLE PRODOTTI ( COD CHAR(4) PRIMARY KEY, NOME CHAR(13), CITTA CHAR(10)); CREATE TABLE FORNITURE (FCOD CHAR(4), PCOD CHAR(4), PRCOD CHAR(4), QUANTITA INTEGER, PRIMARY KEY (FCOD,PCOD,PRCOD), FOREIGN KEY (FCOD) REFERENCES FORNITORI(COD), FOREIGN KEY (PCOD) REFERENCES PARTI(COD), FOREIGN KEY (PRCOD) REFERENCES PRODOTTI(COD)); CREATE TABLE SPEDIZIONI (SPCOD CHAR(4) PRIMARY KEY, DATASP DATE, CITTADEST CHAR(20), CITTAPART CHAR(20), QTA_TOTALE SMALLINT); CREATE TABLE SPED_DETTAGLI (SPCOD CHAR(4) , FCOD CHAR(4) , PCOD CHAR(4) , PRCOD CHAR(4) , PRIMARY KEY (SPCOD,FCOD,PCOD,PRCOD), FOREIGN KEY (SPCOD) REFERENCES SPEDIZIONI(SPCOD), FOREIGN KEY (FCOD,PCOD,PRCOD) REFERENCES FORNITURE(FCOD,PCOD,PRCOD)); -- -- inserimento tuple in PARTI -- INSERT INTO PARTI VALUES ('P034', 'RUOTA1', 'NERO', 120000, 'PARMA'); INSERT INTO PARTI VALUES ('P002','BULLONE','ROSSO', NULL, 'MILANO'); INSERT INTO PARTI VALUES ('P005','MICROFONO','BIANCO' , 340, 'GENOVA'); INSERT INTO PARTI VALUES ('P006','AURICOLARE','NERO' , 120, 'MILANO'); INSERT INTO PARTI VALUES ('P009','CORPO_TASTIERA','BEIGE' , 400, 'MILANO'); INSERT INTO PARTI VALUES ('P033','RUOTA2','ROSSO' , 140000, NULL); INSERT INTO PARTI VALUES ('P001','VITE','ROSSO' , 10, 'ROMA'); INSERT INTO PARTI VALUES ('P004','CORNETTA',NULL, 900, 'ROMA'); INSERT INTO PARTI VALUES ('P007','TASTO','NERO' , 120, 'PAVIA'); INSERT INTO PARTI VALUES ('P008','TASTO','BIANCO' , 120, 'PAVIA'); INSERT INTO PARTI VALUES ('P010','APPOGGIO_TAST','BEIGE' , 920000, NULL); INSERT INTO PARTI VALUES ('P003','PENNINO','ROSSO' , 100, 'TORINO'); INSERT INTO PARTI VALUES ('P011','ADESIVO','METALLO' , 9, 'TORINO'); -- -- inserimento tuple in 
PRODOTTI -- INSERT INTO PRODOTTI VALUES ('PR01','TASTIERA_IBM','TORINO'); INSERT INTO PRODOTTI VALUES ('PR02','TASTIERA_DEC',NULL); INSERT INTO PRODOTTI VALUES ('PR03','SCHEDA_PC','MILANO'); INSERT INTO PRODOTTI VALUES ('PR04','SCHEDA_COMP','ROMA'); INSERT INTO PRODOTTI VALUES ('PR05','XT_IBM','ROMA'); INSERT INTO PRODOTTI VALUES ('PR06','M24','TORINO'); INSERT INTO PRODOTTI VALUES ('PR07','AT_IBM','PAVIA'); INSERT INTO PRODOTTI VALUES ('PR08','MAC','TORINO'); INSERT INTO PRODOTTI VALUES ('PR11','TASTIERA_WIRE','TORINO'); INSERT INTO PRODOTTI VALUES ('PR12','TASTIERA_FLES','MILANO'); INSERT INTO PRODOTTI VALUES ('PR13','SCHEDA_AUDIO','MILANO'); INSERT INTO PRODOTTI VALUES ('PR14','SCHEDA_VIDEO','PAVIA'); INSERT INTO PRODOTTI VALUES ('PR15','VAIO','LATINA'); INSERT INTO PRODOTTI VALUES ('PR26','DELL','VENEZIA'); INSERT INTO PRODOTTI VALUES ('PR37','CANON','PAVIA'); INSERT INTO PRODOTTI VALUES ('PR58','NOKIA_SET','FIRENZE'); -- -- inserimento tuple in FORNITORI -- INSERT INTO FORNITORI VALUES ('F001','ROSSI','MILANO'); INSERT INTO FORNITORI VALUES ('F002','NERI','ROMA'); INSERT INTO FORNITORI VALUES ('F003','BIANCHI','MILANO'); INSERT INTO FORNITORI VALUES ('F004','DONATI','ROMA'); INSERT INTO FORNITORI VALUES ('F015','MARIANO','VENEZIA'); INSERT INTO FORNITORI VALUES ('F116','GILARDI','VENEZIA'); INSERT INTO FORNITORI VALUES ('F217','VERDI','PARMA'); INSERT INTO FORNITORI VALUES ('F328','PUCCINI','LUCCA'); INSERT INTO FORNITORI VALUES ('F339','CUGINI',NULL); INSERT INTO FORNITORI VALUES ('F110','LUCINI','TORINO'); INSERT INTO FORNITORI VALUES ('F211','BIANCHI','TORINO'); INSERT INTO FORNITORI VALUES ('F130','BIZET','PAVIA'); INSERT INTO FORNITORI VALUES ('F313','MOSCONI','TORINO'); INSERT INTO FORNITORI VALUES ('F314','ANDREI',NULL); INSERT INTO FORNITORI VALUES ('F315','MONTELATICI','FIRENZE'); INSERT INTO FORNITORI VALUES ('F316','OTTOZ','AOSTA'); INSERT INTO FORNITORI VALUES ('F317','FRENI','MODENA'); INSERT INTO FORNITORI VALUES ('F218','VILLA','ROMA'); INSERT INTO FORNITORI VALUES ('F230','MOSCONI','ROMA'); INSERT INTO FORNITORI VALUES ('F332','ILLO','ROMA'); -- -- inserimento tuple FORNITURE -- INSERT INTO FORNITURE VALUES ('F003','P004','PR05', 100); INSERT INTO FORNITURE VALUES ('F001','P001','PR01', 100); INSERT INTO FORNITURE VALUES ('F001','P001','PR02', 50); INSERT INTO FORNITURE VALUES ('F218','P001','PR08', 200); INSERT INTO FORNITURE VALUES ('F230','P001','PR01', 3000); INSERT INTO FORNITURE VALUES ('F315','P001','PR01', 104); INSERT INTO FORNITURE VALUES ('F003','P006','PR07', 1980); INSERT INTO FORNITURE VALUES ('F003','P001','PR08', 9000); INSERT INTO FORNITURE VALUES ('F003','P001','PR01', 4000); INSERT INTO FORNITURE VALUES ('F313','P002','PR04', 204); INSERT INTO FORNITURE VALUES ('F313','P002','PR05', 124); INSERT INTO FORNITURE VALUES ('F130','P005','PR06', 440); INSERT INTO FORNITURE VALUES ('F317','P002','PR07', 6003); INSERT INTO FORNITURE VALUES ('F003','P002','PR08', 1400); INSERT INTO FORNITURE VALUES ('F116','P001','PR04', 1030); INSERT INTO FORNITURE VALUES ('F004','P001','PR06', 1003); INSERT INTO FORNITURE VALUES ('F004','P001','PR01', 1020); INSERT INTO FORNITURE VALUES ('F116','P001','PR01', 1008); INSERT INTO FORNITURE VALUES ('F230','P002','PR01', 344); INSERT INTO FORNITURE VALUES ('F230','P003','PR02', 2008); INSERT INTO FORNITURE VALUES ('F314','P001','PR03', 9008); INSERT INTO FORNITURE VALUES ('F314','P003','PR07', 1008); INSERT INTO FORNITURE VALUES ('F315','P006','PR01', 10000); INSERT INTO FORNITURE VALUES ('F316','P003','PR02', 90020); INSERT 
INTO FORNITURE VALUES ('F217','P004','PR03', 10); INSERT INTO FORNITURE VALUES ('F317','P007','PR04', 456899); INSERT INTO FORNITURE VALUES ('F116','P002','PR04', 1030); INSERT INTO FORNITURE VALUES ('F116','P001','PR03', 1003); INSERT INTO FORNITURE VALUES ('F116','P002','PR02', 1020); INSERT INTO FORNITURE VALUES ('F116','P007','PR01', 1008); INSERT INTO FORNITURE VALUES ('F315','P003','PR03', 344); INSERT INTO FORNITURE VALUES ('F230','P033','PR02', 2008); INSERT INTO FORNITURE VALUES ('F314','P033','PR03', 9008); INSERT INTO FORNITURE VALUES ('F314','P033','PR07', 1008); INSERT INTO FORNITURE VALUES ('F315','P033','PR01', 10000); INSERT INTO FORNITURE VALUES ('F001','P033','PR58', 90020); INSERT INTO FORNITURE VALUES ('F217','P001','PR58', 10); INSERT INTO FORNITURE VALUES ('F317','P008','PR58', 456899); -- -- inserimento tuple SPEDIZIONI -- INSERT INTO SPEDIZIONI VALUES('SP01','2005-10-01','MILANO','MILANO',200); INSERT INTO SPEDIZIONI VALUES('SP02','2005-11-01','MILANO','TORINO',150); INSERT INTO SPEDIZIONI VALUES('SP03','2005-12-01','TORINO','MILANO',300); INSERT INTO SPEDIZIONI VALUES('SP04','2005-10-02','VENEZIA','BOLOGNA',10); INSERT INTO SPEDIZIONI VALUES('SP05','2006-10-01','BOLOGNA','BOLOGNA',330); INSERT INTO SPEDIZIONI VALUES('SP06','2006-10-01','VENEZIA','ROMA',50); INSERT INTO SPEDIZIONI VALUES('SP07','2007-01-01','ROMA','NAPOLI',NULL); INSERT INTO SPEDIZIONI VALUES('SP08','2007-02-17','ROMA','VENEZIA',1000); INSERT INTO SPEDIZIONI VALUES('SP09','2007-03-11','NAPOLI','NAPOLI',2000); -- -- inserimento tuple SPED_DETTAGLI -- INSERT INTO SPED_DETTAGLI VALUES ('SP01','F003','P004','PR05'); INSERT INTO SPED_DETTAGLI VALUES ('SP01','F001','P001','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP01','F001','P001','PR02'); INSERT INTO SPED_DETTAGLI VALUES ('SP01','F218','P001','PR08'); INSERT INTO SPED_DETTAGLI VALUES ('SP01','F230','P001','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP02','F315','P001','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP02','F003','P006','PR07'); INSERT INTO SPED_DETTAGLI VALUES ('SP03','F003','P001','PR08'); INSERT INTO SPED_DETTAGLI VALUES ('SP03','F003','P001','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F313','P002','PR04'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F313','P002','PR05'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F130','P005','PR06'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F317','P002','PR07'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F003','P002','PR08'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F116','P001','PR04'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F004','P001','PR06'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F004','P001','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F116','P001','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP04','F230','P002','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F230','P003','PR02'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F314','P001','PR03'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F314','P003','PR07'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F315','P006','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F316','P003','PR02'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F217','P004','PR03'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F317','P007','PR04'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F116','P002','PR04'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F116','P001','PR03'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F116','P002','PR02'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F116','P001','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP05','F315','P003','PR03'); 
INSERT INTO SPED_DETTAGLI VALUES ('SP06','F230','P033','PR02'); INSERT INTO SPED_DETTAGLI VALUES ('SP07','F314','P033','PR03'); INSERT INTO SPED_DETTAGLI VALUES ('SP07','F314','P033','PR07'); INSERT INTO SPED_DETTAGLI VALUES ('SP07','F315','P033','PR01'); INSERT INTO SPED_DETTAGLI VALUES ('SP07','F001','P033','PR58'); INSERT INTO SPED_DETTAGLI VALUES ('SP08','F217','P001','PR58'); INSERT INTO SPED_DETTAGLI VALUES ('SP08','F317','P008','PR58'); ``` Upvotes: 0 <issue_comment>username_3: From your sample data, you can try this SQL statement. Get `MIN` wage and MATRICOLA(PK) on subquery and `join` self you will get the minimum wage for his work. ``` SELECT a.* FROM DIPENDENTI a INNER JOIN ( SELECT MIN(STIPENDIO),MATRICOLA FROM DIPENDENTI ) b ON b.MATRICOLA = a.MATRICOLA; ``` [SQLFiddle](http://sqlfiddle.com/#!9/a16b21/19) Upvotes: 1 <issue_comment>username_2: Thank to everyone, i managed to solve the problem, here it is ``` SELECT nome_imp, mansione, stipendio FROM dipendenti WHERE DIPENDENTI.stipendio = (SELECT MIN(stipendio) FROM dipendenti d2 WHERE DIPENDENTI.mansione = d2.mansione) ``` Upvotes: 0
2018/03/18
331
1,221
<issue_start>username_0: I always use a Mac at work and build my Spring and Spring Boot apps from the command line with Maven installed. So normally I write: ``` mvn clean install ``` Now I was using my girlfriend's Windows PC and I don't want to install Maven on it. My idea was to use the Maven wrapper provided by Spring Initializer when you open a new project from IntelliJ IDEA CE. I've tried many combinations in the command-line window within IntelliJ, but the command is never recognized. What am I doing wrong? I've tried: ``` ./mvnw clean ./mvn clean mvnw.cmd clean ./mvnw.cmd clean ``` but with no results at all. I attach an image of the project with the Maven wrapper folder opened to show its contents. [![enter image description here](https://i.stack.imgur.com/sxtF8.png)](https://i.stack.imgur.com/sxtF8.png)<issue_comment>username_1: You need to descend one more directory level, from what I can tell in your screenshot. ``` cd C:\Users\DeboraPC\Downloads\demoRossini\demoRossini mvnw clean install ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: In the Windows cmd prompt, `mvnw clean test` worked. In the IntelliJ terminal, `.\mvnw clean test` worked. I hope that helps. Upvotes: 0
2018/03/18
302
981
<issue_start>username_0: I am getting the error `tmpl() is not a function` in my jQuery 3.3.1 project. According to [this page](http://stephenwalther.com/archive/2010/11/30/an-introduction-to-jquery-templates), templates should be part of the core, after version 1.5. This is my HTML code: ``` <tr> <td>${title}</td> <td><span class="fa fa-plus-square"></span>${amount}<span class="fa fa-minus-square"></span></td> </tr> ... ``` and this is my Javascript: ``` $("#itemTemplate").tmpl(item).appendTo("#basket"); ``` I have searched the jQuery site, but couldn't find anything on templates.<issue_comment>username_1: You need to descend one more directory level down from what I can tell in your screenshot. ``` cd C:\Users\DeboraPC\Downloads\demoRossini\demoRossini mvnw clean install ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: In windows cmd `"mvnw clean test"` worked In Intellij terminal `".\mvnw clean test"` worked I hope that helps. Upvotes: 0
2018/03/18
1,512
6,374
<issue_start>username_0: I am thinking about building a system that requires Actors to create a subscription to an Azure Service Bus topic with a filter specific to the Actor instance. My question is: if the Actor (that has the subscription to the Topic) has been deactivated in Service Fabric, will it be (re-)activated by a new message being sent by Azure Service Bus? Thank you<issue_comment>username_1: Your Actor will not be activated by receiving a message. It is activated by remoting calls and reminders only. So this approach won't work. What you can do is receive messages in a [Service](https://github.com/loekd/ServiceFabric.ServiceBus), and forward them to an Actor instance. Calling an Actor creates the instance(s) on the fly, if needed. Upvotes: 3 [selected_answer]<issue_comment>username_2: Based on the [Actor's lifecycle](https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-actors-lifecycle) it has to be activated. An Azure Service Bus message coming from a topic will not activate an actor. Instead, you'd need a supervisor process that would do so. Messages could contain a property to represent the required actor ID. This would also allow you to simplify your Azure Service Bus topology by having a single topic and a scaled-out supervisor. Upvotes: 1 <issue_comment>username_3: This can be easily achieved with reminders, since the actor needs to be called first. The create method sets the connection string, topic name and subscription name, and creates them if needed. The reminder checks whether the subscription client is null and, if so, creates it. The reminder keeps firing even after failures, so you can handle them and re-create the client after a crash. <https://github.com/Huachao/azure-content/blob/master/articles/service-fabric/service-fabric-reliable-actors-timers-reminders.md> ``` public async Task CreateAsync(BusOptions options, CancellationToken cancellationToken) { if (options?.ConnectionString == null) { return false; } await StateManager.AddOrUpdateStateAsync("Options", options,(k,v) => v != options?
options:v, cancellationToken); var client = new ManagementClient(options.ConnectionString); try { var exist = await client.TopicExistsAsync(options.TopicName, cancellationToken); if (!exist) { await client.CreateTopicAsync(options.TopicName, cancellationToken); } exist = await client.SubscriptionExistsAsync(options.TopicName, options.SubscriptionName, cancellationToken); if (!exist) { await client.CreateSubscriptionAsync(options.TopicName, options.SubscriptionName, cancellationToken); } var rules =await client.GetRulesAsync(options.TopicName,options.SubscriptionName,cancellationToken: cancellationToken); if(rules.FirstOrDefault(x=>x.Name == options.RuleName) == null) { SqlFilter filter = new SqlFilter(options.RuleFilterSqlValue); await client.CreateRuleAsync(options.TopicName, options.SubscriptionName, new RuleDescription(options.RuleName, filter)); } } catch (Exception ex) { ActorEventSource.Current.ActorMessage(this, ex.Message); } return true; } public async Task DeleteAsync(BusOptions options, CancellationToken cancellationToken) { var client = new ManagementClient(options.ConnectionString); try { await client.DeleteRuleAsync(options.TopicName, options.SubscriptionName, options.RuleName, cancellationToken); await client.DeleteSubscriptionAsync(options.TopicName, options.SubscriptionName, cancellationToken); } catch (Exception ex) { ActorEventSource.Current.ActorMessage(this, ex.Message); } } private ISubscriptionClient subscriptionClient; public async Task SendAsync(SendMessage message, CancellationToken cancellationToken) { var options =await StateManager.TryGetStateAsync("Options"); if (!options.HasValue) { ActorEventSource.Current.ActorMessage(this, "First execute CreateAsync. No options set."); return false; } var client = new TopicClient(options.Value.ConnectionString,options.Value.TopicName); var msg = new Message(message.Body); if(message.UserProperties != null) { foreach (var item in message.UserProperties) { msg.UserProperties.Add(item); } } msg.Label = message.Label; await client.SendAsync(msg); await StateManager.AddOrUpdateStateAsync("Messages\_Send", 1, (key, value) => 1 > value ? 
1 : value, cancellationToken); return true; } void RegisterOnMessageHandlerAndReceiveMessages() { var messageHandlerOptions = new MessageHandlerOptions(ExceptionReceivedHandler) { MaxConcurrentCalls = 1, AutoComplete = false }; subscriptionClient.RegisterMessageHandler(ProcessMessagesAsync, messageHandlerOptions); } async Task ProcessMessagesAsync(Message message, CancellationToken cancellationToken) { ActorEventSource.Current.ActorMessage(this, message.Label); await subscriptionClient.CompleteAsync(message.SystemProperties.LockToken); } Task ExceptionReceivedHandler(ExceptionReceivedEventArgs exceptionReceivedEventArgs) { var context = exceptionReceivedEventArgs.ExceptionReceivedContext; ActorEventSource.Current.ActorMessage(this, string.Format("Exception context for troubleshooting: - Endpoint: {0}- Entity Path: {1}- Executing Action: {2} - MEssage: {3}", context.Endpoint,context.EntityPath,context,exceptionReceivedEventArgs.Exception.Message)); return Task.CompletedTask; } protected override async Task OnActivateAsync() { ActorEventSource.Current.ActorMessage(this, $"Actor '{Id.GetStringId()}' activated."); IActorReminder Recieve\_Message = await this.RegisterReminderAsync( "Recieve\_Message", null, TimeSpan.FromSeconds(1), //The amount of time to delay before firing the reminder TimeSpan.FromSeconds(1)); } public async Task ReceiveReminderAsync(string reminderName, byte[] state, TimeSpan dueTime, TimeSpan period) { if (reminderName.Equals("Recieve\_Message")) { if(subscriptionClient == null) { var options = await StateManager.TryGetStateAsync("Options"); if (!options.HasValue) { ActorEventSource.Current.ActorMessage(this, "First execute CreateAsync. No options set."); return; } var conn = new ServiceBusConnectionStringBuilder(options.Value.ConnectionString); subscriptionClient = new SubscriptionClient(options.Value.ConnectionString, options.Value.TopicName, options.Value.SubscriptionName); RegisterOnMessageHandlerAndReceiveMessages(); } } } ``` Upvotes: 1
2018/03/18
916
3,307
<issue_start>username_0: I have a controller action method in .NET Web API where at the beginning of method there's a log statement that simply logs **Started** which means that the execution has started. Then, just before returning the response, there's another log statement that logs **Finished** which means that execution has completed. Now, I want to setup a Sumo Logic notification alert if the time difference between the two log events exceeds a specific number e.g. 10 seconds. Basically, what I want to achieve from this is that if my API endpoint takes time more than a specific duration to send response, I want to get notified.<issue_comment>username_1: Pardon me for asking but what prevented you from coding it? Cannot you just subtract start time from end time and take action based on the number of seconds of the outcome? Upvotes: 0 <issue_comment>username_2: I'm not familiar with SumoLogic so don't know if there's a way to have it search the logs for a `Started` and `Ended` event with the same id (i.e. something to indicate the Ended found relates to the same query as the Started) then compare the times. However it looks like it does allow you to fire alerts based on single log entries: <https://help.sumologic.com/Dashboards-and-Alerts/Alerts/03-Create-a-Real-Time-Alert> ``` public T MyApiFunction() { T result; var id = Guid.NewGuid(); //id used so we can tie up related start and end events if that option's possible var startedAt = DateTime.UtcNow; logger.Log("Started", id, startedAt); //... var completedAt = DateTime.UtcNow; logger.Log("Completed", id, completedAt); var secondsTaken = end.Subtract(start).TotalSeconds; if (secondsTaken > AlertThresholdSeconds) { logger.Error(String.Format("API call time exceeded threshold: {0} seconds", secondsTaken),id); } return result; } ``` I suspect there are better options out there / that SumoLogic offers options which monitor the call externally, rather than requiring additional logic in the API's code to handle this. Sadly I couldn't see any obvious documentation for that though. Upvotes: 2 [selected_answer]<issue_comment>username_3: This is possible with the [join](https://help.sumologic.com/Search/Search-Query-Language/Search-Operators/join) operator. Effectively, if you have 2 searches (one for the start event and one for the finish event), you can join them together and then subtract finish time from start time to get a delta and fire an event from that. [Example](https://help.sumologic.com/Search/Search-Query-Language/Search-Operators/join) **Input** `starting stream from stream-2454 starting stream from stream-7343 starting search search-733434 from parent stream stream-2454 starting search search-854343 from parent stream stream-7343 starting stream from stream-6543 starting search search-455563 from parent stream stream-6543 starting search search-32342 from parent stream stream-7343` **Code** `* | join (parse "starting stream from *" AS a) AS T1, (parse "starting search * from parent stream *" AS b, c) AS T2 on T1.a = T2.c` **Results** `a b c stream-2454 search-733434 stream-2454 stream-7343 search-854343 stream-7343 stream-7343 search-32342 stream-7343 stream-6543 search-854343 stream-6543` Upvotes: 0
2018/03/18
990
3,388
<issue_start>username_0: When running `mvn verify` I am getting below message: [![enter image description here](https://i.stack.imgur.com/gJljM.png)](https://i.stack.imgur.com/gJljM.png) I already put the `log4j2.xml` under `src/test/resources` (but not in `src/main/resources` because I do not want it to ship with actual app) as suggested [here](https://stackoverflow.com/a/25487455) to no avail. [![enter image description here](https://i.stack.imgur.com/TaHg2.png)](https://i.stack.imgur.com/TaHg2.png) The HTML report is generated, the log file is written, and the build is successful as seen above. I am unsure where the error is coming from.<issue_comment>username_1: You use the latest Cucumber version 3.15.0 (at the time of writing this). Your problem was resolved as [issue #675](https://github.com/damianszczepanik/cucumber-reporting/issues/675). There was a [commit](https://github.com/damianszczepanik/cucumber-reporting/commit/195347fe0c1b374c9630c130f6fd111ac334bc49) from a [pull request](https://github.com/damianszczepanik/cucumber-reporting/pull/712) meant to fix this problem, but probably you have to wait until the next version is released in order to profit from it - or build a snapshot version locally and see if it also fixes the problem for you. Upvotes: 2 <issue_comment>username_2: According <http://logging.apache.org/log4j/2.x/manual/configuration.html#AutomaticConfiguration> adding `log4j2.xml` to classpath should help. > > If a JSON file cannot be located the XML ConfigurationFactory will try > to locate log4j2.xml on the classpath. > > > So if you still get ERROR message problem not with your sources. It looks like problem in `maven-cucumber-reporting` plugin. As described here <http://maven.apache.org/guides/mini/guide-maven-classloading.html> > > Please note that the plugin classloader does neither contain the dependencies of the current project nor its build output. > > > If maven plugin cannot find any log4j configuration in its own classpath (which is not including your sources/resources) you will see error messages. So yes, the only solution wait for release with fix <https://github.com/damianszczepanik/cucumber-reporting/issues/675> Upvotes: 3 [selected_answer]<issue_comment>username_3: Run -> Run Configurations -> Classpath -> User entries -> Advanced Optiones -> Add Folders -> Then add your folder,in your case src/test/resources -> apply It must work now :D Upvotes: 1 <issue_comment>username_4: You are receiving this error because `cucumber` or its dependencies are using `log4j` for logging and since there is no `log4j` configuration file so `log4j` is printing such message. If there is no `log4j` configuration file, `log4j` uses configuration similar to below - ``` xml version="1.0" encoding="UTF-8"? ``` Adding above configuration in `/src/main/resources/log4j2.xml` will have same result as having no `log4j2.xml` file. Another way to try is to pass `log4j2.xml` file to maven directly without putting it in project classpath. Something like below - ``` set MAVEN_OPTS="-Dlog4j.configurationFile=file:///path/to/log4j2.xml" mvn verify ``` Upvotes: 1 <issue_comment>username_5: according to their documentation, you can use 'joranConfigurator' and load the configuration from a specified file [documentation link](https://logback.qos.ch/manual/configuration.html#joranDirectly) Upvotes: 0
2018/03/18
316
979
<issue_start>username_0: In Python, how can I create a program that calculates the number molecules in a wet piece of water?<issue_comment>username_1: You can use `input` to request a year, the use `str.format` to create a date string ``` def get_year(): year = input('Enter a year: ') return '1/1/{}'.format(year) ``` For example ``` >>> get_year() Enter a year: 1975 '1/1/1975' ``` Once you have your new date, you can just continue to do what you showed in your existing code. Upvotes: 1 <issue_comment>username_2: This might be what you want: ``` from datetime import datetime startyear = input('Please enter a start year: ') startdate = datetime.strptime(startyear+'-01-01', '%Y-%m-%d') today = datetime.now() numberofdays = today - startdate print('The number of days from {0} to today is {1}'.format(startdate.strftime('%Y-%m-%d'), numberofdays.days)) ``` --- Please enter a start year: 2018 The number of days from 2018-01-01 to today is 76 Upvotes: 0
2018/03/18
573
1,820
<issue_start>username_0: I want to store two different sorting expressions into list and then want to execute it using orderBy linq. I have following code. ``` class Product { public int Id { get; set; } public String Name {get; set;} } List GetSortExpressions() { Expression> idSortExpression= p => p.Id; Expression> nameSortExpression = p => p.Name; List sorts = new List(); sorts.Add(idSortExpression); sorts.Add(nameSortExpression ); return sorts; } ``` Then I want to execute sorts list on IQueryable object like below. ``` IQueryable ApplySortExpressions(IQueryable data, List sorts) { for(int i=0;i < sorts.Count; i++) { if(i == 0) { data = data.OrderBy(sorts[i]); } else { data = ((IOrderedQueryable)data).ThenBy(sorts[i]); } } return data; } ``` I can't add expressions to list because of type error. Can you please give me right list generic type for below line ? ``` List sorts = new List(); ``` Thanks in advance.<issue_comment>username_1: You can use `input` to request a year, the use `str.format` to create a date string ``` def get_year(): year = input('Enter a year: ') return '1/1/{}'.format(year) ``` For example ``` >>> get_year() Enter a year: 1975 '1/1/1975' ``` Once you have your new date, you can just continue to do what you showed in your existing code. Upvotes: 1 <issue_comment>username_2: This might be what you want: ``` from datetime import datetime startyear = input('Please enter a start year: ') startdate = datetime.strptime(startyear+'-01-01', '%Y-%m-%d') today = datetime.now() numberofdays = today - startdate print('The number of days from {0} to today is {1}'.format(startdate.strftime('%Y-%m-%d'), numberofdays.days)) ``` --- Please enter a start year: 2018 The number of days from 2018-01-01 to today is 76 Upvotes: 0
2018/03/18
2,542
9,391
<issue_start>username_0: I am beginner at Swift and a student. I want to pop up detailed view when the GMSMarker is tapped. However when I tap GMSMarker, infoView will be called and then, infoView tapped, detailed view shows up on this code. How do I modify this? Isn't it possible to go straight to detailed view from GMSMarker without infoView? ``` //MARK: textfield func textFieldShouldBeginEditing(_ textField: UITextField) -> Bool { let autoCompleteController = GMSAutocompleteViewController() autoCompleteController.delegate = self let filter = GMSAutocompleteFilter() autoCompleteController.autocompleteFilter = filter self.locationManager.startUpdatingLocation() self.present(autoCompleteController, animated: true, completion: nil) return false } // MARK: GOOGLE AUTO COMPLETE DELEGATE func viewController(_ viewController: GMSAutocompleteViewController, didAutocompleteWith place: GMSPlace) { let lat = place.coordinate.latitude let long = place.coordinate.longitude showPartyMarkers(lat: lat, long: long) let camera = GMSCameraPosition.camera(withLatitude: lat, longitude: long, zoom: 17.0) myMapView.camera = camera txtFieldSearch.text=place.formattedAddress chosenPlace = restaurant(name: place.formattedAddress!, lat: lat, long: long) let restaurantMarker=GMSMarker() restaurantMarker.position = CLLocationCoordinate2D(latitude: lat, longitude: long) restaurantMarker.title = "\(place.name)" restaurantMarker.snippet = "\(place.formattedAddress!)" restaurantMarker.map = myMapView self.dismiss(animated: true, completion: nil) // dismiss after place selected } func viewController(_ viewController: GMSAutocompleteViewController, didFailAutocompleteWithError error: Error) { print("ERROR AUTO COMPLETE \(error)") } func wasCancelled(_ viewController: GMSAutocompleteViewController) { self.dismiss(animated: true, completion: nil) } func initGoogleMaps() { let camera = GMSCameraPosition.camera(withLatitude: 28.7041, longitude: 77.1025, zoom: 17.0) self.myMapView.camera = camera self.myMapView.delegate = self self.myMapView.isMyLocationEnabled = true } // MARK: CLLocation Manager Delegate func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) { print("Error while getting location \(error)") } func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) { locationManager.delegate = nil locationManager.stopUpdatingLocation() let location = locations.last let lat = (location?.coordinate.latitude)! let long = (location?.coordinate.longitude)! let camera = GMSCameraPosition.camera(withLatitude: lat, longitude: long, zoom: 17.0) self.myMapView.animate(to: camera) showPartyMarkers(lat: lat, long: long) } // MARK: GOOGLE MAP DELEGATE restaurantMarkerがタップされた時の動き func mapView(_ mapView: GMSMapView, didTap restaurantMarker: GMSMarker) -> Bool { guard let customMarkerView = restaurantMarker.iconView as? CustomMarkerView else { return false } let img = customMarkerView.img! let customMarker = CustomMarkerView(frame: CGRect(x: 0, y: 0, width: customMarkerWidth, height: customMarkerHeight), image: img, borderColor: UIColor.white, tag: customMarkerView.tag) restaurantMarker.iconView = customMarker return false } func mapView(_ mapView: GMSMapView, markerInfoContents restaurantMarker: GMSMarker) -> UIView? { guard let customMarkerView = restaurantMarker.iconView as? 
CustomMarkerView else { return nil } let data = previewDemoData[customMarkerView.tag] restaurantPreviewView.setData(title: data.title, img: data.img, price: data.price) return restaurantPreviewView } func mapView(_ mapView: GMSMapView, didTapInfoWindowOf restaurantMarker: GMSMarker) { guard let customMarkerView = restaurantMarker.iconView as? CustomMarkerView else { return } let tag = customMarkerView.tag restaurantTapped(tag: tag) } func mapView(_ mapView: GMSMapView, didCloseInfoWindowOf restaurantMarker: GMSMarker) { guard let customMarkerView = restaurantMarker.iconView as? CustomMarkerView else { return } let img = customMarkerView.img! let customMarker = CustomMarkerView(frame: CGRect(x: 0, y: 0, width: customMarkerWidth, height: customMarkerHeight), image: img, borderColor: UIColor.darkGray, tag: customMarkerView.tag) restaurantMarker.iconView = customMarker } //マーカーがどう動くか func showPartyMarkers(lat: Double, long: Double) { myMapView.clear() for restaurant in restaurants { let restaurantMarker=GMSMarker() let customMarker = CustomMarkerView(frame: CGRect(x: 0, y: 0, width: customMarkerWidth, height: customMarkerHeight), image: previewDemoData[0].img, borderColor: UIColor.darkGray, tag: 0) //[0]の部分をiにしてデータから順に引張てくるようにすれば良い restaurantMarker.position = CLLocationCoordinate2D(latitude: restaurant.lat, longitude: restaurant.long) restaurantMarker.iconView=customMarker restaurantMarker.map = self.myMapView } } @objc func btnMyLocationAction() { let location: CLLocation? = myMapView.myLocation if location != nil { myMapView.animate(toLocation: (location?.coordinate)!) } } @objc func restaurantTapped(tag: Int) { let v=DetailsVC() v.passedData = previewDemoData[tag] self.navigationController?.pushViewController(v, animated: true) } func setupTextField(textField: UITextField, img: UIImage){ textField.leftViewMode = UITextFieldViewMode.always let imageView = UIImageView(frame: CGRect(x: 5, y: 5, width: 20, height: 20)) imageView.image = img let paddingView = UIView(frame:CGRect(x: 0, y: 0, width: 30, height: 30)) paddingView.addSubview(imageView) textField.leftView = paddingView } func setupViews() { view.addSubview(myMapView) myMapView.topAnchor.constraint(equalTo: view.topAnchor).isActive=true myMapView.leftAnchor.constraint(equalTo: view.leftAnchor).isActive=true myMapView.rightAnchor.constraint(equalTo: view.rightAnchor).isActive=true myMapView.bottomAnchor.constraint(equalTo: view.bottomAnchor, constant: 60).isActive=true self.view.addSubview(txtFieldSearch) txtFieldSearch.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor, constant: 10).isActive=true txtFieldSearch.leftAnchor.constraint(equalTo: view.leftAnchor, constant: 10).isActive=true txtFieldSearch.rightAnchor.constraint(equalTo: view.rightAnchor, constant: -10).isActive=true txtFieldSearch.heightAnchor.constraint(equalToConstant: 35).isActive=true setupTextField(textField: txtFieldSearch, img: #imageLiteral(resourceName: "map_Pin")) restaurantPreviewView=RestaurantPreviewView(frame: CGRect(x: 0, y: 0, width: self.view.frame.width, height: 190)) self.view.addSubview(btnMyLocation) btnMyLocation.bottomAnchor.constraint(equalTo: view.bottomAnchor, constant: -30).isActive=true btnMyLocation.leftAnchor.constraint(equalTo: view.leftAnchor, constant: 20).isActive=true btnMyLocation.widthAnchor.constraint(equalToConstant: 50).isActive=true btnMyLocation.heightAnchor.constraint(equalTo: btnMyLocation.widthAnchor).isActive=true } let myMapView: GMSMapView = { let v=GMSMapView() v.translatesAutoresizingMaskIntoConstraints=false 
return v }() let txtFieldSearch: UITextField = { let tf=UITextField() tf.borderStyle = .roundedRect tf.backgroundColor = .white tf.layer.borderColor = UIColor.darkGray.cgColor tf.placeholder="場所で検索" tf.translatesAutoresizingMaskIntoConstraints=false return tf }() let btnMyLocation: UIButton = { let btn=UIButton() btn.backgroundColor = UIColor.white btn.setImage(#imageLiteral(resourceName: "my_location"), for: .normal) btn.layer.cornerRadius = 5 btn.clipsToBounds=true btn.tintColor = UIColor.white btn.imageView?.tintColor=UIColor.white btn.addTarget(self, action: #selector(btnMyLocationAction), for: .touchUpInside) btn.translatesAutoresizingMaskIntoConstraints=false return btn }() var restaurantPreviewView: RestaurantPreviewView = { let v=RestaurantPreviewView() return v }() ```<issue_comment>username_1: The problem is that your markers are POI (points of interest) markers, not custom markers. You can either disable the infoViews on POI markers ([already answered on StackOverflow](https://stackoverflow.com/questions/7478069/disable-point-of-interest-information-window-using-google-maps-api-v3)) or you can add your own markers to the map which will not open infoViews by default. Custom markers use this delegate method that reference `marker` whereas you are referencing `restaurantMarker`: ``` func mapView(_ mapView: GMSMapView, didTap marker: GMSMarker) -> Bool { ... } ``` Upvotes: 0 <issue_comment>username_2: Whenever a `GMSMarker` is tapped, you get the callback in `GMSMapViewDelegate's` method: ``` func mapView(_ mapView: GMSMapView, didTap marker: GMSMarker) -> Bool ``` By default, when a `GMSMarker` is tapped, `infoWindow` for that marker pops up. In case you want something else to happen on marker tap, add it to the `delegate` method and **`return true`**. ``` func mapView(_ mapView: GMSMapView, didTap marker: GMSMarker) -> Bool { //Your custom logic to move to another VC return true } ``` Upvotes: 2 [selected_answer]
2018/03/18
444
1,433
<issue_start>username_0: I want to use the Use() function of the Gorilla Mux package, but I cannot get it to work. It says: `r.Use undefined (type *mux.Router has no field or method Use)`. I used almost the identical example from the documentation. My code looks like this. ``` package main import ( "net/http" "github.com/gorilla/mux" "fmt" ) func simpleMw(next http.Handler) http.Handler { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { fmt.Println(r.RequestURI) next.ServeHTTP(w, r) }) } func handler(w http.ResponseWriter, r *http.Request) { fmt.Fprintf(w, "hello") } func main() { r := mux.NewRouter() r.HandleFunc("/", handler) r.Use(simpleMw) http.Handle("/", r) http.ListenAndServe(":8000", nil) } ``` You can find the example of the documentation here: <http://www.gorillatoolkit.org/pkg/mux#overview>, search for "middleware". I know I could use [this](https://stackoverflow.com/questions/26204485/gorilla-mux-custom-middleware) method, but I would like to use the Gorilla package. Many thanks.<issue_comment>username_1: Thanks to <NAME>, I solved my problem. My package was outdated. I updated it with `go get -u github.com/gorilla/mux` and now it's working. Many thanks to y'all! Upvotes: 3 [selected_answer]<issue_comment>username_2: You are also passing nil to ListenAndServe; pass the router instead: ``` http.ListenAndServe(":8000", r) ``` Upvotes: 0
2018/03/18
558
2,043
<issue_start>username_0: I am trying to connect the Firestore db to my Android application. I followed the instructions but I cannot create an instance of a Firestore db because a "FirebaseFirestore symbol cannot be resolved" error pops up. Here are my dependencies in the build.gradle (app) file: ``` dependencies { implementation fileTree(dir: 'libs', include: ['*.jar']) implementation 'com.android.support:appcompat-v7:26.1.0' implementation 'com.android.support.constraint:constraint-layout:1.0.2' compile 'com.google.firebase:firebase-core:11.8.0' testImplementation 'junit:junit:4.12' androidTestImplementation 'com.android.support.test:runner:1.0.1' androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.1' } ``` and I also added this line to the end of the build.gradle file: `apply plugin: 'com.google.gms.google-services'` and these are the dependencies in the build.gradle (project) file: ``` dependencies { classpath 'com.android.tools.build:gradle:3.0.1' classpath 'com.google.gms:google-services:3.2.0' // NOTE: Do not place your application dependencies here; they belong // in the individual module build.gradle files } ``` and Google's Maven repository is defined like this: ``` allprojects { repositories { google() jcenter() } ``` What am I missing? I also added the google-services.json file.<issue_comment>username_1: For Firebase database and its references use: ``` compile 'com.google.firebase:firebase-database:11.8.0' ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: Try this: ``` dependencies { implementation 'com.google.firebase:firebase-database:11.8.0' // ... } ``` Follow the documentation [here](https://firebase.google.com/docs/android/setup) Upvotes: 1 <issue_comment>username_3: Just add this Gradle dependency: ``` dependencies { implementation 'com.google.firebase:firebase-firestore:15.0.0' // ... } ``` Upvotes: 0
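Once the `firebase-firestore` artifact from the last answer is on the classpath, a minimal way to confirm that `FirebaseFirestore` resolves is to grab an instance and write a document. This is only a sketch: the class name `FirestoreCheck` and the collection name `samples` are made up for illustration, and it assumes an Android module with Java 8 lambdas enabled.

```java
import com.google.firebase.firestore.FirebaseFirestore;

import java.util.HashMap;
import java.util.Map;

public class FirestoreCheck {

    public void writeSample() {
        // Resolves only when the firebase-firestore dependency is present.
        FirebaseFirestore db = FirebaseFirestore.getInstance();

        Map<String, Object> doc = new HashMap<>();
        doc.put("name", "test");

        db.collection("samples")   // "samples" is an arbitrary collection name
          .add(doc)
          .addOnSuccessListener(ref ->
                  android.util.Log.d("FirestoreCheck", "written: " + ref.getId()))
          .addOnFailureListener(e ->
                  android.util.Log.e("FirestoreCheck", "write failed", e));
    }
}
```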
2018/03/18
1,351
2,580
<issue_start>username_0: I have a strings like the one as follows: ``` -9.853418333333334 35.020405 0.0 0.0;-9.854273333333333 35.02038 0.0 0.0;-9.85452 35.01970166666667 0.0 0.0;-9.854205 35.019618333333334 0.0 0.0;-9.853418333333334 35.020405 0.0 0.0; ``` and I want use a regular expression to match the semicolon and the previous two numbers and spaces before each semicolon so I can replace them all with a comma and a space like this. ``` -9.853418333333334 35.020405, -9.854273333333333 35.02038, -9.85452 35.01970166666667, -9.854205 35.019618333333334, -9.853418333333334 35.020405, ``` If all the two numbers before the semicolon were always 0.0, I wouldn't bother to use a regular expression, but unfortunately, sometimes they are more complicated as follows: ``` -10.134578333333334 34.945479999999996 1433.7 2.5;-10.134636666666667 34.94678 1455.4 2.1;-10.133980000000001 34.946913333333335 1457.5 2.2;-10.133958333333334 34.945555 1434.8 2.1;-10.134578333333334 34.945479999999996 1433.7 2.5; ``` I tried this: ``` \s.+?\s.+?; ``` But it selects every number that has a space before it, not just the first two before the semicolon. Any suggestions?<issue_comment>username_1: Simply use a digit in place of any character (and remember to match a fraction!) ``` \s\d+\.\d+\s\d+\.\d+; ``` Upvotes: 1 [selected_answer]<issue_comment>username_2: I would suggest this regular expression: ``` \s+\d+(\.\d+)?\s+\d+(\.\d+)?; ``` This can be tested [here](http://pythex.org/?regex=%5Cs%2B%5Cd%2B(%5C.%5Cd%2B)%3F%5Cs%2B%5Cd%2B(%5C.%5Cd%2B)%3F%3B&test_string=-9.853418333333334%2035.020405%200.0%200.0%3B-9.854273333333333%2035.02038%200.0%200.0%3B-9.85452%2035.01970166666667%200.0%200.0%3B-9.854205%2035.019618333333334%200.0%200.0%3B-9.853418333333334%2035.020405%200.0%200.0%3B&ignorecase=0&multiline=0&dotall=0&verbose=0). This expression matches the following components: 1. \s+ multiple spaces 2. \d+(.\d+)? first double 3. \s+ multiple spaces 4. \d+(.\d+)? second double or integer Upvotes: 0 <issue_comment>username_3: **Regex**: [`(?: [0-9.]+){2};`](https://regex101.com/r/BHCvmH/1) (85 steps) Details: * `(?:)` Non capturing group * `' '` Space * `[0-9.]+` Match a single character present in the list between one and unlimited times * `{2}` Matches exactly 2 times Upvotes: 1 <issue_comment>username_1: (Sorry for posting multiple answers) I think we all were focused on proper matching of numbers, but it *may* appear that the task is actually just matching two non-space text fields followed by a semicolon: ``` (?:\s\S+){2}; ``` Upvotes: 0
2018/03/18
761
1,885
<issue_start>username_0: I'm building an art gallery with woocommerce. each product has a limited number of prints (for example: 100). I'm looking for a way of showing in the product title, the number of the current print based of the number of prints already sold, so for example if 20 prints where sold, the product name would be "NAME OF PRINT - PRINT NUMBER 21" So far I have been searching the web for a solution, but could not find the right way. Any help is appreciated.<issue_comment>username_1: Simply use a digit in place of any character (and remember to match a fraction!) ``` \s\d+\.\d+\s\d+\.\d+; ``` Upvotes: 1 [selected_answer]<issue_comment>username_2: I would suggest this regular expression: ``` \s+\d+(\.\d+)?\s+\d+(\.\d+)?; ``` This can be tested [here](http://pythex.org/?regex=%5Cs%2B%5Cd%2B(%5C.%5Cd%2B)%3F%5Cs%2B%5Cd%2B(%5C.%5Cd%2B)%3F%3B&test_string=-9.853418333333334%2035.020405%200.0%200.0%3B-9.854273333333333%2035.02038%200.0%200.0%3B-9.85452%2035.01970166666667%200.0%200.0%3B-9.854205%2035.019618333333334%200.0%200.0%3B-9.853418333333334%2035.020405%200.0%200.0%3B&ignorecase=0&multiline=0&dotall=0&verbose=0). This expression matches the following components: 1. \s+ multiple spaces 2. \d+(.\d+)? first double 3. \s+ multiple spaces 4. \d+(.\d+)? second double or integer Upvotes: 0 <issue_comment>username_3: **Regex**: [`(?: [0-9.]+){2};`](https://regex101.com/r/BHCvmH/1) (85 steps) Details: * `(?:)` Non capturing group * `' '` Space * `[0-9.]+` Match a single character present in the list between one and unlimited times * `{2}` Matches exactly 2 times Upvotes: 1 <issue_comment>username_1: (Sorry for posting multiple answers) I think we all were focused on proper matching of numbers, but it *may* appear that the task is actually just matching two non-space text fields followed by a semicolon: ``` (?:\s\S+){2}; ``` Upvotes: 0
2018/03/18
792
2,456
<issue_start>username_0: I have two variable. both have some value like. ```js var position = "340px"; var end_position = "4px"; var final = position + end_position; //result comes NAN // sometime it return 340px4px // i want 344px ``` how do i add these value to get desirable result. Help will be appreciated.<issue_comment>username_1: You can use `parseInt` (or `parseFloat` depends on number) and add both numbers ```js var position = "340px"; var end_position = "4px"; var final = ( parseInt(position) + parseInt(end_position) ) + "px"; console.log( final ) ``` Upvotes: 1 <issue_comment>username_2: You could use [`parseInt`](https://www.w3schools.com/jsref/jsref_parseint.asp) ``` var position = "340px"; var end_posiiton = "4px"; var final = parseInt(position, 10) + parseInt(end_position,10) + "px"; ``` Upvotes: 2 <issue_comment>username_3: You're concatenating strings rather than adding numbers, to avoid that, you can remove anything different than a number *(i.e: using regex)* and then execute the add operation. Further, you need to convert the operands as numbers. To convert the operands as numbers, you can use either the object **[`Number`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number)** or simply prefix them with `+`. Prefixing with `+` ```js var position = "340px"; var end_position = "4px"; var final = +position.replace(/[^\d]/g, '') + +end_position.replace(/[^\d]/g, '') + 'px'; console.log(final) ``` Using the object **[`Number`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number)** ```js var position = "340px"; var end_position = "4px"; var final = Number(position.replace(/[^\d]/g, '')) + Number(end_position.replace(/[^\d]/g, '')) + 'px'; console.log(final) ``` Upvotes: 1 <issue_comment>username_4: you need to convert the string to int, do the sum the convert them back to strings ```js var position = "340px"; var end_position = "4px"; var final = (parseInt(position) + parseInt(end_position)).toString()+"px"; console.log(final); //result comes NAN // sometime it return 340px4px // i want 344px ``` Upvotes: 1 <issue_comment>username_5: One more variation to do it using `Array.prototype.split`: ```js var position = "340px" var end_position = "4px" var final = +position.split('px')[0] + +end_position.split('px')[0] + "px" console.log(final) ``` Upvotes: 1 [selected_answer]
2018/03/18
1,240
4,740
<issue_start>username_0: I have a GUI made using javax.swing and java.awt, the request focus was working on keeping the text field in focus, so that a user can just start with a keyboard. I then added buttons for each integer 0-9, aswell as a clear field button. However the focus now always starts on a button. The focus still returns to the textField whenever I click a button or if I initiate the focus it remains in the textField, how can I fix this problem and have the focus on the text field everytime the window opens? Example Number Buttons ``` JButton btn0 = new JButton("0"); panel.add(btn0); btn0.setBounds(50, 360, 50, 50); btn0.setHorizontalAlignment(SwingConstants.CENTER); btn0.setForeground(Color.BLACK); btn0.setFont(new Font("Arial", Font.BOLD, 20)); btn0.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { String userIn = txtGuess.getText() + btn0.getText(); txtGuess.setText(userIn); } }); ``` textField Code ``` txtGuess = new JTextField(); txtGuess.setBounds(325, 220, 100, 35); panel.add(txtGuess); txtGuess.setFont(new Font("Arial", Font.BOLD, 25)); txtGuess.setHorizontalAlignment(SwingConstants.CENTER); txtGuess.setBackground(Color.decode("#206BA4")); txtGuess.setForeground(Color.decode("#EBF4FA")); txtGuess.setBorder(loweredBorder); txtGuess.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { checkGuess(); } }); ``` The end of checkGuess(); ``` finally { txtGuess.requestFocus(); //sets focus to the text box after checking guess txtGuess.selectAll(); //highlights all text in the text field so UX is improved if (attempt >= 10) { lblOutput.setText("You lose! Try Again?"); newGame(); } ```<issue_comment>username_1: ``` txtGuess.requestFocus(); ``` First of all you should not be using that method. Read the API for the method and it will tell you the better method to use. > > how can I fix this problem and have the focus on the text field every time the window opens? > > > Focus should go to the component at the top left of the frame by default. If this isn't happening, then you are doing something strange. If your text field is not the first component on the frame, then you can only set focus on it AFTER the GUI has been made visible. Based on the code posted it looks like the text field is located above the button, so it should get focus. Maybe the problem is that you are using a null layout and the order you add components to the frame. We can't tell without a proper MCVE. Other suggestions for your code: 1. Don't use a null layout and setBounds(). You should not be manually setting a size/ Swing was designed to be used with layout managers. 2. There is no need to create a unique ActionListener for every button. You can create a generic listener to be shared by every button. Check out: [How to add a shortcut key for a jbutton in java?](https://stackoverflow.com/questions/33739623/how-to-add-a-shortcut-key-for-a-jbutton-in-java/33739732#33739732) for a working example of this approach. > > I'm trying to figure out how to create the MCVE version to demonstrate the problem > > > Its not a big mystery. You stated you had a frame with a text field and it worked. Then you added a button and it didn't work. So The MCVE will simply consist of a frame with a text field and button. The game logic is irrelevant to your question so it is not needed. So the MCVE should be about 10 - 15 lines of code. 
Upvotes: 1 <issue_comment>username_2: The solution was to implement this code ``` frame.addWindowFocusListener(new WindowAdapter() { public void windowGainedFocus(WindowEvent e) { textfield.requestFocusInWindow(); } }); ``` Upvotes: 1 [selected_answer]<issue_comment>username_3: The focus goes by default to the first focusable Component in the layout. If you want to change this behaviour: * either follow the [accepted answer](https://stackoverflow.com/a/49363599/9064287) * or add an `AncestorListener`on the component you want to gain focus * or call `pack()` on the frame/dialog, request the focus `textfield.requestFocusInWindow();` and then show the frame/dialog `frame.setVisible(true);` See this [other question](https://stackoverflow.com/q/62821608/9064287) that has been answered on SO, as well as [this article](https://tips4java.wordpress.com/2010/03/14/dialog-focus/) that gives you the full answer. Upvotes: 0
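To make username_1's shared-listener suggestion concrete, here is a small sketch (not taken from the thread): one `ActionListener` instance serves every digit button, appends the clicked button's label to the guess field, and hands focus straight back to it. `txtGuess` follows the question's naming; the class and method names are invented for illustration.

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JTextField;

public final class DigitInput {

    // One listener shared by all digit buttons (0-9).
    public static ActionListener sharedDigitListener(final JTextField txtGuess) {
        return new ActionListener() {
            @Override
            public void actionPerformed(ActionEvent e) {
                JButton source = (JButton) e.getSource();         // the button that was clicked
                txtGuess.setText(txtGuess.getText() + source.getText());
                txtGuess.requestFocusInWindow();                   // keep focus on the text field
            }
        };
    }

    // Each button registers the same listener instance instead of its own anonymous class.
    public static JButton digitButton(String digit, ActionListener shared) {
        JButton button = new JButton(digit);
        button.addActionListener(shared);
        return button;
    }
}
```

Compared with ten separate anonymous listeners, this keeps the wiring in one place and makes the focus hand-off back to the text field consistent for every button.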
2018/03/18
1,846
6,919
<issue_start>username_0: I have a AST of elementary math arithmetic expressions: ``` data Expr = Constant Int | Variable String | Add Expr Expr | Multiply Expr Expr deriving (Show) ``` I also have a really simple function which simplify given expression: ``` simplify :: Expr -> Expr simplify (Add (Constant 0) e) = simplify e simplify (Add e (Constant 0)) = simplify e simplify (Add (Constant a) (Constant b)) = Constant (a + b) simplify (Add e1 e2) = Add (simplify e1) (simplify e2) simplify (Multiply (Constant 0) _) = Constant 0 simplify (Multiply _ (Constant 0)) = Constant 0 simplify (Multiply (Constant 1) e) = e simplify (Multiply e (Constant 1)) = e simplify (Multiply (Constant a) (Constant b)) = Constant (a * b) simplify (Multiply e1 e2) = Multiply (simplify e1) (simplify e2) simplify e = e ``` Unfortunately, this function is not very effective, because it simplify expression from root to leafs (from top to bottom). Consider this expression: ``` exampleExpr :: Expr exampleExpr = Add (Multiply (Constant 1) (Variable "redrum")) (Multiply (Constant 0) (Constant 451)) ``` It cost two function calls (`simplify (simplify exampleExpr)`) to reduce this expression into `Variable "redrum"`. With bottom up approach, it should cost only one function call. I'm not experienced enough yet to be able to write this code effectively. So my question is: **how to rewrite this function to simplify given expression from leafs to root (bottom to top)?**<issue_comment>username_1: Firstly, you're missing a couple of recursive calls. In these lines: ``` simplify (Multiply (Constant 1) e) = e simplify (Multiply e (Constant 1)) = e ``` You should replace the right-hand side with `simplify e`. ``` simplify (Multiply (Constant 1) e) = simplify e simplify (Multiply e (Constant 1)) = simplify e ``` Now to rewrite the expression from the bottom up. The problem is that you're looking for simplification patterns on the left-hand side of your equation, ie, before you simplify the children. You need to simplify the children first, and then look for the pattern. ``` simplify :: Expr -> Expr simplify (Add x y) = case (simplify x, simplify y) of (Constant 0, e) -> e (e, Constant 0) -> e (Constant a, Constant b) -> Constant (a + b) (x1, y1) -> Add x1 y1 simplify (Multiply x y) = case (simplify x, simplify y) of (Constant 0, _) -> Constant 0 (_, Constant 0) -> Constant 0 (Constant 1, e) -> e (e, Constant 1) -> e (Constant a, Constant b) -> Constant (a * b) (x1, y1) -> Multiply x1 y1 simplify e = e ``` On the left-hand side of the equation, we find the children of the current node. On the right, we look for patterns in the simplified children. One way of improving this code is to separate the two responsibilities of finding-and-replacing children and of matching simplification patterns. Here's a general function to recursively replace every subtree of an `Expr`: ``` transform :: (Expr -> Expr) -> Expr -> Expr transform f (Add x y) = f $ Add (transform f x) (transform f y) transform f (Multiply x y) = f $ Multiply (transform f x) (transform f y) transform f e = f e ``` `transform` takes a (non-recursive) transformation function which calculates a replacement for a single-node pattern, and recursively applies it to every node in the tree in a bottom-up manner. To write a transformation function, you just look for the interesting patterns and forget about recursively rewriting the children. 
``` simplify = transform f where f (Add (Constant 0) e) = e f (Add e (Constant 0)) = e f (Add (Constant a) (Constant b)) = Constant (a + b) f (Multiply (Constant 0) _) = Constant 0 f (Multiply _ (Constant 0)) = Constant 0 f (Multiply (Constant 1) e) = e f (Multiply e (Constant 1)) = e f (Multiply (Constant a) (Constant b)) = Constant (a * b) f e = e ``` Since `f`'s argument has already had its children rewritten by `transform`, we don't need to exhaustively match every possible pattern or explicitly recurse through the value. We look for the ones we care about, and nodes which don't need transforming fall through to the catch-all `f e = e` case. Generic programming libraries like [`lens`](https://hackage.haskell.org/package/lens)'s [`Plated` module](https://hackage.haskell.org/package/lens-4.15.4/docs/Control-Lens-Plated.html) take programming patterns like `transform` and make them universal. You (or the compiler) write a small amount of code characterising the shape of your datatype, and the library implements recursive higher-order functions like `transform` once and for all. Upvotes: 4 [selected_answer]<issue_comment>username_2: Simplifying expression ASTs is a typical application for the recursion scheme called *catamorphism*. Here is an example with the [recursion-schemes](http://hackage.haskell.org/package/recursion-schemes) library from <NAME>: ``` {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE DeriveFunctor #-} {-# LANGUAGE DeriveFoldable #-} {-# LANGUAGE DeriveTraversable #-} {-# LANGUAGE TemplateHaskell #-} module CataExprSimplify where import Data.Functor.Foldable import Data.Functor.Foldable.TH data Expr = Constant Int | Variable String | Add Expr Expr | Multiply Expr Expr deriving (Show) -- | Generate the base functor makeBaseFunctor ''Expr simplify :: Expr -> Expr simplify = cata $ algSimplAdd . project . algSimplMult -- | Simplify Addition simplZero :: Expr -> Expr simplZero = cata algSimplAdd algSimplAdd :: ExprF Expr -> Expr algSimplAdd (AddF (Constant 0) r) = r algSimplAdd (AddF l (Constant 0)) = l algSimplAdd (AddF (Constant l) (Constant r)) = Constant (l + r) algSimplAdd x = embed x -- | Simplify Multiplication simplMult :: Expr -> Expr simplMult = cata algSimplMult algSimplMult :: ExprF Expr -> Expr algSimplMult (MultiplyF (Constant 1) r) = r algSimplMult (MultiplyF l (Constant 1)) = l algSimplMult (MultiplyF (Constant 0) _) = Constant 0 algSimplMult (MultiplyF _ (Constant 0)) = Constant 0 algSimplMult (MultiplyF (Constant l) (Constant r)) = Constant (l * r) algSimplMult x = embed x ``` It has the following advantages over code that uses direct recursion calls: * Recursion is abstracted out to the cata function and not intertwinned with your simplification logic. * You will not forget to call simplify on subexpressions. * Catamorphisms work from bottom to top. * Simplification of Addition and Multiplication can be written in separate functions. * it's much easier to maintain your code if you have to extend your AST (eg adding new constructors) If you want to read more about recursion schemes read this [blog post series](http://blog.sumtypeofway.com/an-introduction-to-recursion-schemes/) Upvotes: 1
2018/03/18
470
1,561
<issue_start>username_0: When performing a $lookup on my schemas, it always return an empty array. What am I doing wrong? **Result Collection** ``` const resultSchema = new mongoose.Schema({ trial: { type: mongoose.Schema.Types.ObjectId, ref: 'Trial', required: true } }); ``` --- **Trial Collection** ``` const trialSchema = new mongoose.Schema({ name: { type: String, required: true } }); ``` --- **Aggregate** ``` Result.aggregate([ { $lookup: { from: 'trial', localField: 'trial', foreignField: '_id', as: 'x' } } ]) .exec() .then(results => ({ results })) ``` "x" ends up being always an empty array in the end.<issue_comment>username_1: check collection name. In aggregate function 'trial' starts lowercase, in results collection 'Trial' starts uppercase Upvotes: 0 <issue_comment>username_2: Ok just found the answer right here: <https://stackoverflow.com/a/45481516/3415561> The "from" field in lookup must be your collection name and not the model name. Therefore it's a plural word. Here it's ``` from: 'trials' ``` instead of ``` from: 'trial' ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: In my case I had mistakenly put a $ in front of the collection name: ``` {$lookup: { from: '$users', // <- incorrect foreignField: '_id', localField: 'userid', as: 'users' }}, ``` it needed to be ``` {$lookup: { from: 'users', // <- correct foreignField: '_id', localField: 'userid', as: 'users' }}, ``` Upvotes: 1
2018/03/18
766
2,606
<issue_start>username_0: Pretty new to Optional/Java8 usage and I had a doubt regarding its usage. I have a function in my API which returns a String which can be null or empty. Its similar to finding the id in email like: <EMAIL> -> abc is o/p. Now one way of doing this was: ``` public Optional getUser(final String emailId) { if (TextUtils.isEmpty(emailId)) { return Optional.empty(); } String userIDSeparator = "@"; int userNameEndIndex = emailId.indexOf(userIDSeparator); if (userNameEndIndex == -1) { return Optional.empty(); } return Optional.of(emailId.substring(0, userNameEndIndex)); } ``` I was wondering if there is any neater way of doing this to return an Optional? Also, Optional was introduced in Java8 so is there anyway the code can be java7 compatible? I am aware of preprocessors in C not sure if something similar is available. Note: I have not compiled this code, so there might be some errors. I wanted the input on Optional. Thanks!<issue_comment>username_1: Maybe you mean something like this : ``` public Optional getUser(final String emailId) { return Optional.ofNullable(emailId) .filter(email -> email.contains("@")) .map(email -> Optional.of(email.replaceAll("(.\*?)@.\*", "$1"))) .orElseGet(Optional::empty); } ``` --- **Example** ``` null -> Optional.empty "" -> Optional.empty "<EMAIL>" -> abd ``` --- As [@Aominè](https://stackoverflow.com/questions/49348154/how-can-i-use-optional-to-return-empty-if-there-is-any-exception-in-the-code-in/49348518?noredirect=1#comment85697077_49348518) mention, there are some unnecessary parts in my solution, you can use this version instead : ``` public Optional getUser(final String emailId) { return Optional.ofNullable(emailId) .filter(email -> email.contains("@")) .map(email -> email.replaceAll("(.\*?)@.\*", "$1")); } ``` Upvotes: 3 <issue_comment>username_2: Well, the code can certainly be reduced. i.e. ``` public Optional getUser(final String emailId) { return Optional.of(emailId) .filter(email -> email.contains("@")) .map(email -> email.substring(0, email.indexOf("@"))); } ``` if this method can ever receive a `null` value then you'd need to change `Optional.of(emailId)` to `Optional.ofNullable(emailId)`. As for: > > Also, Optional was introduced in Java8 so is there any way the code can > be java7 compatible? > > > Not that I know of. There may be other libraries that have similar functionality to Java's Optional type, so a little bit of research online may get you to the right place. Upvotes: 4 [selected_answer]
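A quick, self-contained way to check the thread's chain against the inputs the question mentions (a normal address, a string with no `@`, an empty string, and `null`); the class name `GetUserDemo` is invented for the sketch.

```java
import java.util.Optional;

public class GetUserDemo {

    static Optional<String> getUser(String emailId) {
        // Same chain as the answers above, with ofNullable so null input is handled too.
        return Optional.ofNullable(emailId)
                .filter(email -> email.contains("@"))
                .map(email -> email.substring(0, email.indexOf('@')));
    }

    public static void main(String[] args) {
        System.out.println(getUser("abc@def.com")); // Optional[abc]
        System.out.println(getUser("no-at-sign"));  // Optional.empty
        System.out.println(getUser(""));            // Optional.empty
        System.out.println(getUser(null));          // Optional.empty
    }
}
```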
2018/03/18
1,091
3,592
<issue_start>username_0: I have two tags, the first one is to select the brand and the second one is to select the product. My question/problem is that I want to tell php to echo inside the second select tag. The `echo`ed option needs to have the same `brand_name` as the selected option from the first tag. What I need is that if I choose Adidas as the brand, php would show every product that has the `brand_name` Adidas. I have two different database tables, the first one called `brand`: ``` brand_id | brand_name 1 | ADIDAS 2 | NIKE ``` The second one called `product`: ``` product_id | brand_name | product_name | product_image | amount | sell | buy 1 | ADIDAS | T-Shirt | none | 50 | 30 | 28 2 | NIKE | Shoes | none | 20 | 130 | 120 ``` Here is my code: ``` ----SELECT---- php require\_once 'connections/dbc.php'; $query = "SELECT `brand\_name` FROM `brands`"; $response = @mysqli\_query($conn, $query); if (!$response) { $\_SESSION['errortitle'] ='Error loading the brands'; $\_SESSION['errormessage'] = 'Something wrong happend while loading the brands.<br/Please contact The developer'; header("location: error.php"); exit(); } else { while($row = mysqli\_fetch\_array($response)){ echo ''.$row['brand\_name'].''; } } ?> ----SELECT---- php require\_once 'connections/dbc.php'; ###########HERE WHERE I NEED MY CODE############## $query = "SELECT `product\_name` FROM `product WHERE brand\_name = "##### The selected option from the first select tag" "; $response = @mysqli\_query($conn, $query); if (!$response) { $\_SESSION['errortitle'] = "Error loading the prouducts"; $\_SESSION['errormessage'] = 'Something wrong happend while loading the proudcts.<br/Please contact The developer'; header("location: error.php"); exit(); } else { while($row = mysqli\_fetch\_array($response)){ echo ''.$row['product\_name'].''; } } ?> ```<issue_comment>username_1: Maybe you mean something like this : ``` public Optional getUser(final String emailId) { return Optional.ofNullable(emailId) .filter(email -> email.contains("@")) .map(email -> Optional.of(email.replaceAll("(.\*?)@.\*", "$1"))) .orElseGet(Optional::empty); } ``` --- **Example** ``` null -> Optional.empty "" -> Optional.empty "<EMAIL>" -> abd ``` --- As [@Aominè](https://stackoverflow.com/questions/49348154/how-can-i-use-optional-to-return-empty-if-there-is-any-exception-in-the-code-in/49348518?noredirect=1#comment85697077_49348518) mention, there are some unnecessary parts in my solution, you can use this version instead : ``` public Optional getUser(final String emailId) { return Optional.ofNullable(emailId) .filter(email -> email.contains("@")) .map(email -> email.replaceAll("(.\*?)@.\*", "$1")); } ``` Upvotes: 3 <issue_comment>username_2: Well, the code can certainly be reduced. i.e. ``` public Optional getUser(final String emailId) { return Optional.of(emailId) .filter(email -> email.contains("@")) .map(email -> email.substring(0, email.indexOf("@"))); } ``` if this method can ever receive a `null` value then you'd need to change `Optional.of(emailId)` to `Optional.ofNullable(emailId)`. As for: > > Also, Optional was introduced in Java8 so is there any way the code can > be java7 compatible? > > > Not that I know of. There may be other libraries that have similar functionality to Java's Optional type, so a little bit of research online may get you to the right place. Upvotes: 4 [selected_answer]
2018/03/18
592
1,925
<issue_start>username_0: This is a reliable way to get the host URL in both SSR and CSR: ``` let host = context.req ? context.req.headers.host : window.location.origin ``` But where should this go so that all components can access the host? It should be loaded once per page and work with both SSR and CSR.
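One common way to make a value like this available everywhere is to compute it once at the top level and expose it through a React context, assuming a React app. The sketch below is a minimal TypeScript illustration of that idea; the `HostProvider`/`useHost` names are made up for the example, and note that `req.headers.host` carries no protocol while `window.location.origin` does, so you may want to normalise the two.

```tsx
import * as React from 'react';

// Resolve the host once per page: on the server from the request, in the browser from location.
export function resolveHost(req?: { headers: { host?: string } }): string {
  if (req && req.headers.host) {
    return req.headers.host;        // server side: taken from the incoming request
  }
  return window.location.origin;    // client side: taken from the browser
}

const HostContext = React.createContext<string>('');

// Wrap the page (or a custom App component) once with this provider…
export function HostProvider(props: { host: string; children: React.ReactNode }) {
  return <HostContext.Provider value={props.host}>{props.children}</HostContext.Provider>;
}

// …and any component can then read the host without prop drilling.
export function useHost(): string {
  return React.useContext(HostContext);
}
```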
2018/03/18
877
3,516
<issue_start>username_0: I've just encountered this approach for the first time: using methods which return self-references rather than overloading the constructor. Consider Selenium's [FluentWait](https://seleniumhq.github.io/selenium/docs/api/java/org/openqa/selenium/support/ui/FluentWait.html)'s sample usage: ``` // Waiting 30 seconds for an element to be present on the page, checking // for its presence once every 5 seconds. Wait wait = new FluentWait(driver) .withTimeout(30, SECONDS) .pollingEvery(5, SECONDS) .ignoring(NoSuchElementException.class); ``` The methods `withTimeout`, `pollingEvery`, and `ignoring` each return self references. This seems like a way to bypass having to create an inordinate amount of overloaded constructors. For instance, you would need 23=8 separate overloaded constructor definitions to allow initializing an instance of `FluentWait` with omission or inclusion of the 3 input arguments for timeout, polling, and ignoring. In general, a class whose constructor can have *n* possible input arguments will need to have *2n* overloaded constructor definitions (if each possible input combination is desired). Using methods seems like an abridged way to get the same functionality. Why should I not do this all the time? Which scenarios is one approach better than the other?<issue_comment>username_1: Overloading constructors and other methods is just a nice way in which you can make use easier for yourself and other developers. Picture that you have an `add()` method. If you want to accept both `2` and `"2"` in Java, you will want to overload your constructor in order to accept both a `String` and `int`. Like so: ``` public add(String number) { ... } public add(int number) { ... } ``` Upvotes: 0 <issue_comment>username_2: Constructor allows to make fields of the instance `final` which is important for creating immutable object. You also can add checks into the constructor which will validate invariants of your fields. To combine flexibility you like in this approach and benefits of using constructor you can use builder pattern. Upvotes: 1 <issue_comment>username_3: This is quite close to [Builder pattern](https://en.wikipedia.org/wiki/Builder_pattern), please check the overview there for when the pattern is applicable or not. In this case you say the `FluentWait` returns self-references. It makes the object mutable. This can be problematic if you have multi-threaded or parallel executions. Upvotes: 2 [selected_answer]<issue_comment>username_4: > > This seems like a way to bypass having to create an inordinate > amount of overloaded constructors. > > > In fact no it doesn't here as each invoked method returns the current object. The [javadoc](https://seleniumhq.github.io/selenium/docs/api/java/org/openqa/selenium/support/ui/FluentWait.html) states clearly that. The object is created here : `new FluentWait(driver)` These are fluent invocations. It avoids prefixing the variable at each time you need to set something in the object. Instead of writing : ``` Wait wait = new FluentWait(driver); wait.setTimeout(30, SECONDS) wait.pollingEvery(5, SECONDS) wait.ignoring(NoSuchElementException.class); ``` You can write : ``` Wait wait = new FluentWait(driver) .withTimeout(30, SECONDS) .pollingEvery(5, SECONDS) .ignoring(NoSuchElementException.class); ``` There is also fluent builder that is a way to instantiate an object in a fluent and thread-safe way but that is another matter. Upvotes: 0
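The trade-off discussed in these answers is language-agnostic, so here is a rough builder-style sketch in TypeScript (not tied to Selenium's actual API) showing how fluent chaining can be kept while still funnelling construction through a single place that validates invariants and produces an immutable result:

```ts
interface WaitConfig {
  readonly timeoutMs: number;
  readonly pollingMs: number;
  readonly ignored: ReadonlyArray<string>;
}

class WaitBuilder {
  private timeoutMs = 30000;
  private pollingMs = 500;
  private ignored: string[] = [];

  withTimeout(ms: number): this { this.timeoutMs = ms; return this; }
  pollingEvery(ms: number): this { this.pollingMs = ms; return this; }
  ignoring(errorName: string): this { this.ignored.push(errorName); return this; }

  // One place to validate invariants before the immutable object is produced.
  build(): WaitConfig {
    if (this.pollingMs > this.timeoutMs) {
      throw new Error('polling interval exceeds timeout');
    }
    return Object.freeze({
      timeoutMs: this.timeoutMs,
      pollingMs: this.pollingMs,
      ignored: [...this.ignored],
    });
  }
}

const wait = new WaitBuilder()
  .withTimeout(30000)
  .pollingEvery(5000)
  .ignoring('NoSuchElementError')
  .build();
```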
2018/03/18
606
2,019
<issue_start>username_0: This question pertains to C in Visual Studio (Community 2015) on Windows 10. I can't seem to create an array with a const size - the below code causes an "expected constant expression" error that prevents build. It's a wchar\_t array here with a size\_t size, but I see the same behavior for char & other array types & int constants. I know the size of arrays needs to be known at compile time, but surely that's the case here. What gives? ``` #include void main() { size\_t const newsize = 100; wchar\_t fileData[newsize]; } ```<issue_comment>username_1: > > I know the size of arrays needs to be known at compile time, but surely that's the case here. > > > That's actually not the case. In C, `const` qualifying doesn't result in a "constant expression". So `newsize` isn't a [constant expression](http://en.cppreference.com/w/c/language/constant_expression) (unlike C++). Your code is valid in C99 and in C11, if [variable length array](https://en.wikipedia.org/wiki/Variable-length_array) (VLA) is supported by your implementation (VLAs are optional in C11). However, it seems Visual studio doesn't seem to support VLAs and expects a "constant expression" for array size as in C89. So you may have to use dynamic memory allocation (malloc & friends), or simply specify the `100` as size, or use a macro for defining size and so on. Upvotes: 3 [selected_answer]<issue_comment>username_2: In C, a const-qualified type is not the same as a constant literal. You have a couple of options in MSVC. First, you can hardcode `100`: ``` #define SIZE 100 int main(void) // main returns int etc etc { wchar_t fileData[SIZE]; } ``` Or, you can create the array using dynamic memory allocation: ``` #include int main(void) { const size\_t newsize = 100; wchar\_t \*fileData; fileData = calloc(sizeof filedata, newsize); } ``` Note that you *can* do `const size_t newsize = 100; wchar_t fileData[newsize];` in C99, but MSVC still doesn't support that fully. Upvotes: 1
2018/03/18
923
3,392
<issue_start>username_0: It is not working. I tried everything. Nothing works. `state.params` is simply not there if you make an advanced app. I have this problem with the react "navigation". It says in the manual that the params object should be there <https://reactnavigation.org/docs/navigation-prop.html#state-the-screen-s-current-state-route> But it isn't. I set an `id` parameter like this in screen 1 when I link to screen 2: ``` this.props.navigation.navigate('FontsTab', { id: item.id }) } style={styles.listHeader} > this.\_selectedItem(item.text)} > {item.title} {item.value} ``` But it's not working. In screen 2 I can't use the `state.params` : ``` { JSON.stringify(this.props.navigation)} TEST{ state.params } {this.state.dataSource.text} ``` `state.params` just returns nothing. What can I do about it? The full class for screen2: ``` class Fonts extends Component { constructor(props) { super(props); this.state = { params: null, selectedIndex: 0, value: 0.5, dataSource: null, isLoading: true }; this.componentDidMount = this.componentDidMount.bind(this); } getNavigationParams() { return this.props.navigation.state.params || {} } componentDidMount(){ return fetch('http://www.koolbusiness.com/newvi/4580715507220480.json') .then((response) => response.json()) .then((responseJson) => { this.setState({ ...this.state, isLoading: false, dataSource: responseJson, }, function(){ }); }) .catch((error) =>{ console.error(error); }); } render() { if(this.state.isLoading){ return( ) } return ( { JSON.stringify(this.props)} TEST{ this.state.params } {this.state.dataSource.text} ); } } ``` In my app this pain is reproducible by a simple button in screen1: `navigate('FontsTab', { name: 'Brent' })} title="Go to Brent's profile" />` Then switching to the FontsTab works but the params are not in the state object: [![enter image description here](https://i.stack.imgur.com/8230B.png)](https://i.stack.imgur.com/8230B.png) I also have this code for the tabview ``` import React, { Component } from 'react'; import { View, Text, StyleSheet } from 'react-native'; import { StackNavigator } from 'react-navigation'; import { Icon } from 'react-native-elements'; import FontsHome from '../views/fonts_home'; import FontsDetails from '../views/fonts_detail'; const FontsTabView = ({ navigation }) => ( ); const FontsDetailTabView = ({ navigation }) => ( ); const FontsTab = StackNavigator({ Home: { screen: FontsTabView, path: '/', navigationOptions: ({ navigation }) => ({ title: '', headerLeft: ( navigation.navigate('DrawerOpen')} /> ), }), }, Detail: { screen: FontsDetailTabView, path: 'fonts\_detail', navigationOptions: { title: 'Fonts Detail', }, }, }); export default FontsTab; ```<issue_comment>username_1: `this.props.navigation.state.params.id` will give you the value of param id passed from screen1. Upvotes: 1 <issue_comment>username_2: I put my screen in the right StackNavigator and then it worked. Upvotes: 0
2018/03/18
621
2,281
<issue_start>username_0: I am new to React (and sorry for my English). When I pass data from a child to a parent component and then click another button, it doesn't re-render. As you can see in my comment, setState creates an infinite loop, which blocks me from updating the value. I have tried different ways to solve the problem, such as creating variables inside the constructor like `this.name = ''` and `this.selected = []` and setting them in the callback as `this.name = name` and `this.selected = selected`, but it doesn't re-render when I click other options. I've tried working with `componentDidMount()` and other lifecycle methods, but I don't know how to solve it. **App.js (Parent component)** ``` state = { name: '', selected: [] }; doParentControl = (name, selected) => { console.log('doParentControl name: ', name); console.log('doParentControl selected: ', selected); // this.setState({ name: name, selected: selected }) --> infinite loop } render() { console.log('-> render App') return ( Map === ); } ``` **Navbar.js (Child component)** ``` constructor(props) { super(props); this.selected = [false, false, false]; this.state = { name: 'tile' }; } handleClick = (e) => { this.setState({name: e.target.value}); } handleChange = (e) => { switch(e.target.value) { case 'option1': { this.setState({ selected: this.selected[0] = e.target.checked}) break; } case 'option2': { this.setState({ selected: this.selected[1] = e.target.checked}) break; } case 'option3': { this.setState({ selected: this.selected[2] = e.target.checked}) break; } default: break; } } doParentControlFromChild = () => { this.props.parentControl(this.state.name, this.selected); } render() { console.log('render Navbar ->') return ( Tile Tile WaterColor Cartografía Satélite Mapa Option 1 Option 2 Option 3 { this.doParentControlFromChild() } ); } ```
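For context, the usual pattern is that the parent owns the state and passes a callback down as a prop, and the child calls that callback only from an event handler; calling it directly inside the child's `render()` (as `{ this.doParentControlFromChild() }` does above) triggers `setState` on every render and produces exactly this kind of loop. The sketch below is a minimal TypeScript/TSX illustration of that wiring; the component and prop names are invented for the example.

```tsx
import * as React from 'react';

type NavbarProps = { onControl: (name: string, selected: boolean[]) => void };
type NavbarState = { name: string; selected: boolean[] };

class Navbar extends React.Component<NavbarProps, NavbarState> {
  state: NavbarState = { name: 'tile', selected: [false, false, false] };

  notifyParent = () => {
    // The parent callback runs in response to a click, so setState in the parent is safe.
    this.props.onControl(this.state.name, this.state.selected);
  };

  render() {
    return <button onClick={this.notifyParent}>Apply</button>;
  }
}

type ParentState = { name: string; selected: boolean[] };

class Parent extends React.Component<{}, ParentState> {
  state: ParentState = { name: '', selected: [] };

  // Passed to the child as a prop; only ever called from an event handler, never during render().
  handleControl = (name: string, selected: boolean[]) => {
    this.setState({ name, selected });
  };

  render() {
    return <Navbar onControl={this.handleControl} />;
  }
}
```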
2018/03/18
316
1,192
<issue_start>username_0: Is this possible? If so, I can't seem to locate any documentation around it.<issue_comment>username_1: You can do that to a limited extent by using a `THREE.ShadowMaterial` for the objects receiving the shadow. You might need a separate render pass for shadow rendering, though. See here for an example: <https://codepen.io/usefulthink/pen/JrZOPw> Upvotes: 2 <issue_comment>username_2: Yes, alter the color of your shadow plane's material. I needed a slightly blue-ish tint to my shadows, so I used something like this: ``` var material = new THREE.ShadowMaterial({opacity: .7, color: '#003'}); ``` Upvotes: 0 <issue_comment>username_3: Adding ambient light can lighten the color of the shadows. It illuminates all the objects in the scene, including the shadows. ``` // Create an ambient light, and set its intensity const ambientLight = new THREE.AmbientLight(0xFFFFFF); ambientLight.intensity = 0.5; // max = 1 ``` ... ``` // Add the ambient light to the scene scene.add(ambientLight); ``` ... Try experimenting with different values to get the desired effect. You may need to adjust the intensity of your light source to get the best result. Upvotes: 0
2018/03/18
1,681
4,888
<issue_start>username_0: In my code, its supposed to be a magic 8 ball that is rigged. Pressing A should show "Yes!", pressing B should show "No." But every time, it shows "Yes!" without any buttons being pressed. ``` from microbit import * import random frameq = Image("00000:""00000:""00900:""00000:""00000") framew = Image("00000:""09990:""09390:""09990:""00000") framee = Image("99999:""93339:""93339:""93339:""99999") framer = Image("33333:""33333:""33333:""33333:""33333") framet = Image("22222:""22222:""22222:""22222:""22222") framey = Image("11111:""11111:""11111:""11111:""11111") frames = [frameq, framew, framee, framer, framet, framey] answers = [ "It is certain", "It is decidedly so", "Without a doubt", "Yes, definitely", "You may rely on it", "As I see it, yes", "Most likely", "Outlook good", "Yes", "Signs point to yes", "Reply hazy try again", "Ask again later", "Better not tell you now", "Cannot predict now", "Concentrate and ask again", "Don't count on it" "My reply is no", "My sources say no", "Outlook not so good", "Very doubtful", ] apress = False bpress = False while True: if button_a.is_pressed: bpress = False apress = True elif button_b.is_pressed: apress = False bpress = True display.show("8") if accelerometer.was_gesture("shake") and apress is True: display.clear() display.show(frames, loop=False, delay=250) sleep(1000) display.show("Yes!") apress = False elif accelerometer.was_gesture("shake") and bpress is True: display.clear() display.show(frames, loop=False, delay=250) sleep(1000) display.show("No.") bpress = False elif accelerometer.was_gesture("shake"): display.clear() display.show(frames, loop=False, delay=250) sleep(1000) display.scroll(random.choice(answers)) ``` It shows no error message, it just shows "Yes!" every single time I shake. And by the way, "Yes!" is not in the answers array, only "Yes", and I always see the !.<issue_comment>username_1: Without any more context, one can only assume what the problem is. Make sure that `is_pressed` is not a function: ``` if button_a.is_pressed: bpress = False apress = True elif button_b.is_pressed: apress = False bpress = True ``` if `is_pressed` is a function then `button_a.is_pressed` will *always* be `True` hence `apress` will always be `True` hence you will always get `'Yes!'` printed. Try to change the above code to ``` if button_a.is_pressed(): bpress = False apress = True elif button_b.is_pressed(): apress = False bpress = True ``` Otherwise, **debug** you program. Add print statements in different execution paths and see what causes each `if` statement to be `True`. Upvotes: 2 [selected_answer]<issue_comment>username_2: Thank you username_1! After some touch ups, this is my final code. I'm sorry I didn't make this clear, but it was a magic 8 ball that could be rigged. by pressing A or B. 
Here is the final code: ``` from microbit import * import random frameq = Image("00000:""00000:""00900:""00000:""00000") framew = Image("00000:""09990:""09390:""09990:""00000") framee = Image("99999:""93339:""93339:""93339:""99999") framer = Image("33333:""33333:""33333:""33333:""33333") framet = Image("22222:""22222:""22222:""22222:""22222") framey = Image("11111:""11111:""11111:""11111:""11111") frames = [frameq, framew, framee, framer, framet, framey] answers = [ "It is certain", "It is decidedly so", "Without a doubt", "Yes, definitely", "You may rely on it", "As I see it, yes", "Most likely", "Outlook good", "Yes", "Signs point to yes", "Reply hazy try again", "Ask again later", "Better not tell you now", "Cannot predict now", "Concentrate and ask again", "Don't count on it" "My reply is no", "My sources say no", "Outlook not so good", "Very doubtful", ] apress = False bpress = False while True: if button_a.is_pressed(): bpress = False apress = True elif button_b.is_pressed(): apress = False bpress = True display.show("8") if accelerometer.was_gesture("shake"): if apress: display.clear() display.show(frames, loop=False, delay=250) sleep(1000) display.show("Yes!") apress = False elif bpress: display.clear() display.show(frames, loop=False, delay=250) sleep(1000) display.show("No.") bpress = False else: display.clear() display.show(frames, loop=False, delay=250) sleep(1000) display.scroll(random.choice(answers)) ``` Upvotes: 0
2018/03/18
513
2,240
<issue_start>username_0: One of the advantage of Github Search v4 (GraphQL) over v3 is that it can selectively pick the fields that we want, instead of always getting them all. However, the problem I'm facing now is how to get certain fields. I tried the online help but it is more convolution to me than helpful. Till now, I'm still unable to find the fields for size, score and open issues for the returned repository(ies). That's why I'm wondering if there is a way to get them all, like `Select *` in SQL. Thx.<issue_comment>username_1: Short Answer: No, by design. GraphQL was designed to have the client explicitly define the data required, leading to one of the primary benefits of GraphQL, which is preventing over fetching. Technically you can use [GraphQL fragments](http://graphql.org/learn/queries/#fragments) somewhere in your application for every field type, but if you don't know which fields you are trying to get it wouldn't help you. Upvotes: 3 <issue_comment>username_2: GraphQL requires that when requesting a field that you also request a selection set for that field (one or more fields belonging to that field's type), unless the field resolves to a scalar like a string or number. That means unfortunately there is no syntax for "get all available fields" -- you always have to specify the fields you want the server to return. Outside of perusing the docs, there's two additional ways you can get a better picture of the fields that are available. One is the [GraphQL API Explorer](https://developer.github.com/v4/explorer/), which lets you try out queries in real time. It's just a GraphiQL interface, which means when you're composing the query, you can trigger the autocomplete feature by pressing `Shift`+`Space` or `Alt`+`Space` to see a list of available fields. If you want to look up the fields for a specific type, you can also just ask GraphQL :) ``` query{ __type(name:"Repository") { fields { name description type { kind name description } args { name description type { kind name description } defaultValue } } } } ``` Upvotes: 5 [selected_answer]
2018/03/18
436
1,940
<issue_start>username_0: I have a static website built with HTML and CSS. I added Fancybox and it worked fine, but I recently moved my website to HTTPS and Fancybox stopped working. Any idea what caused this problem? Thanks
2018/03/18
563
2,407
<issue_start>username_0: I have a class which is to be extended by various subclasses. Each subclass shouldn't need to reimplement the class methods, only declare them so that type checking infers the proper types. As an example: ``` class QueryBuilder<T> { insert(data: T): string { return ''; } } class User extends QueryBuilder<IUser> { insert(data: IUser): string; } interface IUser { name: string, email: string } ``` This way I get the following error: ``` Function implementation is missing or not immediately following the declaration. (method) User.insert(data: IUser): string ``` Is there a way to get what I want without having to repeat the implementation for every subclass?
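Assuming the base class is generic over the row type (as reconstructed above), TypeScript already gives the subclass a correctly typed method without any redeclaration: extending `QueryBuilder<IUser>` is enough for `insert` to accept only `IUser`. A minimal sketch (class names are just illustrative):

```ts
interface IUser {
  name: string;
  email: string;
}

class QueryBuilder<T> {
  insert(data: T): string {
    // Placeholder implementation; the real class would build SQL from `data`.
    return JSON.stringify(data);
  }
}

// No method redeclaration needed: the generic argument fixes the parameter type.
class UserQueryBuilder extends QueryBuilder<IUser> {}

const users = new UserQueryBuilder();
users.insert({ name: 'Ada', email: 'ada@example.com' }); // OK
// users.insert({ title: 'oops' });                      // compile-time error
```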
2018/03/18
2,588
8,654
<issue_start>username_0: I have two files which are combined under 600 bytes (.6kb) as below. So how is it that my app.bundle.js is so large (987kb) and more importantly how does one manage the size of it? src file index.js ``` import _ from 'lodash'; import printMe from './print.js'; function component() { var element = document.createElement('div'); var btn = document.createElement('button'); // Lodash, now imported by this script element.innerHTML = _.join(['Hello', 'webpack'], ' '); btn.innerHTML = 'click and check console'; btn.onclick = printMe; element.appendChild(btn); return element; } document.body.appendChild(component()); ``` src file print.js ``` export default function printMe() { consoe.log('Called from print.js'); } ``` webpack.config.js ``` const path = require('path'); const HtmlWebpackPlugin = require('html-webpack-plugin'); const CleanWebpackPlugin = require('clean-webpack-plugin'); module.exports = { entry: { app: './src/index.js', print:'./src/print.js' }, devtool: 'inline-source-map', plugins: [ new CleanWebpackPlugin(['dist']), new HtmlWebpackPlugin({ title: 'Output Management' }) ], output: { filename: '[name].bundle.js', path: path.resolve(__dirname, 'dist') } }; ``` package.json ``` { "name": "my-webpack-4-proj", "version": "1.0.0", "description": "", "main": "index.js", "mode": "development", "scripts": { "dev": "webpack --mode development", "build": "webpack --mode production", "watch": "webpack --watch", "test": "echo \"Error: no test specified\" && exit 1" }, "keywords": [], "author": "", "license": "ISC", "devDependencies": { "clean-webpack-plugin": "^0.1.19", "css-loader": "^0.28.11", "csv-loader": "^2.1.1", "file-loader": "^1.1.11", "html-webpack-plugin": "^3.0.6", "style-loader": "^0.20.3", "webpack": "^4.1.1", "webpack-cli": "^2.0.12", "xml-loader": "^1.2.1" }, "dependencies": { "express": "^4.16.3", "lowdash": "^1.2.0" } } ``` Warning message: > > WARNING in asset size limit: The following asset(s) exceed the > recommended size limit (244 KiB). This can impact web performance. > Assets: app.bundle.js (964 KiB) > > ><issue_comment>username_1: This happens because webpack is bundling all your code dependencies. And as you are using lodash, so lodash minified version will be added to your source code. Plus you are including the source maps: ``` devtool: 'inline-source-map', ``` While this should be fine for debug, there is no reason to include your source maps in a Prod build. So some things that you can do to reduce your bundle size. 1. Make sure to set properly the mode: flag inside your webpack config. You can put either mode: 'development', or mode: 'production'. This will hint webpack about what kind of build you are doing so it will give you the proper warnings. 2. Make sure to not include source maps on your prod build 3. Avoid overusing external dependencies that you don't need and make. Sometimes even these things will not bring your bundle size to below 244kb, what you can do in these cases is to split your bundle and start to use logical chunks. First of all, you can easily separate your js from your styesheets by using [the mini css extract plugin](https://github.com/webpack-contrib/mini-css-extract-plugin). Another technique that you can use are dynamic imports. > > Dynamic Imports: Split code via inline function calls within modules > > > This will allow you to break your code logically into modules tied to the screens so only the required libraries will be loaded. For more info about dynamic imports, you can check the official documentation. 
<https://webpack.js.org/guides/code-splitting/> Upvotes: 7 [selected_answer]<issue_comment>username_2: set mode flag to development/production in `webpack.config.js`. Example: ``` var mode = process.env.NODE_ENV || 'development'; module.exports = { ... devtool: (mode === 'development') ? 'inline-source-map' : false, mode: mode, ... } ``` Upvotes: 4 <issue_comment>username_3: I had the same problem. My bundle file was (1.15 MiB). In my webpack.config.js, replacing : ``` devtool: 'inline-source-map' ``` by this line: ``` devtool: false, ``` takes my bundle file size from **1.15** **MiB** to **275 Kib**. Or create 2 separate webpack config files. One for dev and one for prod. In the prod webpack config file, delete the `devtool` option. Upvotes: 4 <issue_comment>username_4: Simply use below code in webpack.config.js : ``` performance: { hints: false, maxEntrypointSize: 512000, maxAssetSize: 512000 } ``` or follow You can create multiple config file for (development, production). In dev config file use devtool or others necessary dev configuration and vice versa . you have to use webpack-merge package and config package.json scripts code like ``` "scripts": { "test": "echo \"Error: no test specified\" && exit 1", "start": "webpack --open --config webpack.dev.js", "dev": "webpack-dev-server --mode development --open", "build": "webpack --config webpack.prod.js" }, ``` For example : create a file webpack.common.js ``` // webpack.common.js use your common configuration like entry, output, module, plugins, ``` Create webpack.dev.js ``` // webpack.dev.js const merge = require('webpack-merge'); const common = require('./webpack.common.js'); module.exports = merge(common, { mode: 'development', devtool: 'inline-source-map', devServer: { contentBase: './dist' } }); ``` Create webpack.prod.js ``` const merge = require('webpack-merge'); const common = require('./webpack.common.js'); module.exports = merge(common, { mode: 'production', performance: { hints: false, maxEntrypointSize: 512000, maxAssetSize: 512000 } }); ``` Upvotes: 7 <issue_comment>username_5: The main problem is `devtool: 'inline-source-map'` in webpack.common.js file in your case, in my case I use in development `'cheap-module-eval-source-map'`, this devtool is very nice for development mode but make my bundle.js to 4.2MiB, so, in production mode in webpack.prod.js file I change devtool like this: ```js module.exports = merge(common, { mode: 'production', performance: { hints: false }, devtool: 'source-map' }); ``` and now from 4.2mb I've got 732KiB of bundle.js. This devtool will slow down process of bundling for few more seconds but will remarkably reduce the size of the file, in my example from 4.18 MiB to 732 KiB. Upvotes: 2 <issue_comment>username_6: Add the below line in your vue.config.js file. It is because of the size. We need to split into chunks. ``` configureWebpack:{ performance: { hints: false }, optimization: { splitChunks: { minSize: 10000, maxSize: 250000, } } } ``` This may help somebody. 
Upvotes: 4 <issue_comment>username_7: Nuxt solution `nuxt.config.js`: ``` module.exports = { build: { extend(config, { isDev , isClient }) { if (isClient && !isDev) { config.optimization.splitChunks.maxSize = 250000 } } } } ``` Upvotes: 2 <issue_comment>username_8: For me, this was solved as said by others eliminating `devtool` in production The code of `webpack.config.js` ```js module.exports = (env, argv) => { const mode = argv.mode || 'development' const config = { entry: './src/index.js', output: { path: `${__dirname}/lib`, filename: 'index.js', library: 'my-library', libraryTarget: 'umd', }, module: { rules: [ { test: /\.(js|jsx)$/, exclude: /node_modules/, use: ['babel-loader'], }, ], }, devtool: mode === 'development' ? 'cheap-module-eval-source-map' : false, } return config } ``` Upvotes: 1 <issue_comment>username_9: For me, code-splitting and adding 'CompressionPlugin' reduced the webpack size form 1.51 MiB to 350 KiB ```js new CompressionPlugin({ filename: "[path][base].br", algorithm: "brotliCompress", test: /\.(js|jsx|css|html|svg|png|jpg|jpeg)$/, compressionOptions: { params: { [zlib.constants.BROTLI_PARAM_QUALITY]: 11, }, }, threshold: 10240, minRatio: 0.8, deleteOriginalAssets: true, }) ``` install the zlib library also Upvotes: 1 <issue_comment>username_10: Add this to your `webpack.config.js`. It should work. ```js performance: { hints: false, maxEntrypointSize: 512000, maxAssetSize: 512000 }, ``` Upvotes: 0
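To make the dynamic-import suggestion above concrete: instead of listing `print` as a second entry point, the module can be pulled in lazily, which keeps it (and anything it depends on) out of `app.bundle.js`. This is only a TypeScript sketch modelled on the question's own `print` module, not a drop-in replacement for the whole config:

```ts
// print.ts – same shape as the print.js module from the question
export default function printMe(): void {
  console.log('Called from print.ts');
}

// index.ts – load it lazily instead of bundling it into the entry chunk
async function handleClick(): Promise<void> {
  // webpack turns this import() into a separate chunk, fetched on first click only.
  const { default: printMe } = await import('./print');
  printMe();
}

const btn = document.createElement('button');
btn.textContent = 'click and check console';
btn.onclick = () => { void handleClick(); };
document.body.appendChild(btn);
```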
2018/03/18
1,337
4,495
<issue_start>username_0: I've created some templates in Django, and I want to translate them to work in ReactJS. The thing that I'm struggling with the most is the regroup function. There's a number of ways I've thought of approaching it, but I think the easiest is probably to do it within the component. All I've managed to do is map the items which generates the entire list, but I need to have the items dynamically grouped before iterating each group. In React I'd like to be able to apply a command like `items.groupBy('start_time').groupBy('event_name').map(item =>` The output should be 'start\_time', 'event\_name', and then the rest of the data within each event group. Each 'start\_time' will contain multiple events. I'd like to keep the code as concise as possible. This is the Django template: ``` {% if event_list %} {% regroup event\_list by start\_time as start\_time\_list %} {% for start\_time in start\_time\_list %} ###### {{ start\_time.grouper|date:' d-m-Y H:i' }} {% regroup start\_time.list by event as events\_list\_by\_start\_time %} {% for event\_name in events\_list\_by\_start\_time %} ##### [{{ event\_name.grouper|title }}](#collapse-{{ event_name.grouper|slugify }}) {% for item in event\_name.list %} # continue iterating the items in the list ``` This is the render method from the React component: ``` render() { const { error, isLoaded, items, groups } = this.state; if (error) { return Error: {error.message}; } else if (!isLoaded) { return Loading...; } else { return ( {items.map(item => ( #### {item.event} ) )} ); } } } ```<issue_comment>username_1: If you are setting up a React front end, it would be more sensible not to mix django templates with the react front end components. Instead set up a Django/DRF backend api that will feed your react components with JSON data. Upvotes: 2 <issue_comment>username_2: To translate this template from django to react, you just have to reimplement the regroup template tag as a javascript function. For pretty much any django template tag, you can easily find some javascript library that does the same. This is not included in React.js, but instead you can import utilities from libraries such as underscore or moment.js etc. This is the sample django template code from the example in the [documentation for the `{% regroup %}` template tag.](https://docs.djangoproject.com/en/2.0/ref/templates/builtins/#regroup) ``` {% regroup cities by country as country_list %} {% for country in country\_list %} * {{ country.grouper }} {% for city in country.list %} + {{ city.name }}: {{ city.population }} {% endfor %} {% endfor %} ``` Here's how you could do it with react.js ```js // React ( or javascript ) doesn't come with a groupby function built in. // But we can write our own. You'll also find this kind of stuff in lots // of javascript toolsets, such as lowdash, Ramda.js etc. const groupBy = (key, data) => { const groups = {} data.forEach(entry => { const {[key]: groupkey, ...props} = entry const group = groups[groupkey] = groups[groupkey] || [] group.push(props) }) return groups } // I'll define a dumb component function for each nested level of the list. // You can also write a big render function with everything included, // but I find this much more readable – and reusable. 
const City = ({ name, population }) => ( - {name}: {population} ) const Country = ({ country, cities }) => ( - {country} {cities.map(props => )} ) const CityList = ({ cities }) => { const groups = Object.entries(groupBy("country", cities)) return ( {groups.map(([country, cities]) => ( ))} ) } // We'll use the exact same data from the django docs example. const data = [ { name: "Mumbai", population: "19,000,000", country: "India" }, { name: "Calcutta", population: "15,000,000", country: "India" }, { name: "<NAME>", population: "20,000,000", country: "USA" }, { name: "Chicago", population: "7,000,000", country: "USA" }, { name: "Tokyo", population: "33,000,000", country: "Japan" } ] ReactDOM.render(, document.getElementById("app")) ``` ```html ``` If you run the snippet above, you should get the exact same output as what you find in the original django example. ```html * India + Mumbai: 19,000,000 + Calcutta: 15,000,000 * USA + New York: 20,000,000 + Chicago: 7,000,000 * Japan + Tokyo: 33,000,000 ``` Upvotes: 1
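If the project is written in TypeScript, the same `groupBy` helper can be typed so the grouping key is checked at compile time; this is just a typed variant of the helper above, not a different algorithm:

```ts
function groupBy<T, K extends keyof T>(key: K, data: readonly T[]): Map<T[K], T[]> {
  const groups = new Map<T[K], T[]>();
  for (const entry of data) {
    const groupKey = entry[key];
    const group = groups.get(groupKey) ?? [];
    group.push(entry);
    groups.set(groupKey, group);
  }
  return groups;
}

interface City { name: string; population: string; country: string; }

const cities: City[] = [
  { name: 'Mumbai', population: '19,000,000', country: 'India' },
  { name: 'New York', population: '20,000,000', country: 'USA' },
];

// Map { 'India' => [...], 'USA' => [...] }
const byCountry = groupBy('country', cities);
```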
2018/03/18
1,842
6,130
<issue_start>username_0: I'm trying to play a sound by looping an array and split an array into each array, and then using switch case to detect what's in the array. function keeper() { ``` number2 = get.num; sNumber = number2.toString(); output = []; for ( i = 0, len = sNumber.length; i < len; i ++) { output.push(+sNumber.charAt(i)); console.log(output); switch (output[i]){ case 0: console.log('0'); audio0 = new Audio('logo/Q0.wav'); audio0.play(); break; case 1: console.log('1'); audio1 = new Audio('logo/Q1.wav'); audio1.play(); break; case 2: console.log('2'); audio2 = new Audio('logo/Q2.wav'); audio2.play(); break; case 3: console.log('3'); audio3 = new Audio('logo/Q3.wav'); audio3.play(); break; case 4: console.log('4'); audio4 = new Audio('logo/Q4.wav'); audio4.play(); break; case 5: console.log('5'); audio5 = new Audio('logo/Q5.wav'); audio5.play(); break; } }} ``` The function it works just fine, but apparently the sound thats played out it too quick. is there any solution to fix this?<issue_comment>username_1: I'm assuming you want to hear the sounds after each other? That doesn't work like this. Lets say the first number in the array is: 0. So sound 0 gets played. But, since you loop through the array, and you reach the next number, eg. 2: sound 2 gets played immediately after. The loop doesn't wait for the first sound the finish before starting the next play(). what you could do is modify the loop to wait for the audio ended event. for example: ``` var audio0 = document.getElementById("myAudio"); audio0.onended = function() { alert("The audio has ended"); }; ``` Upvotes: 1 <issue_comment>username_2: Try to use a timer: ``` for (var i = 1; i <= 3; i++) { (function(index) { setTimeout(function() { alert(index); }, i * 1000); })(i); } ``` Use the **setTimeout** fuction like that Upvotes: 0 <issue_comment>username_3: Try using an audio sprite. I'm sure there's an app or whatever to do certain tasks programmatically but be aware step 1 and 2 are done manually. 1. Take a group of audio files and either use [Audacity](https://www.audacityteam.org/download/) or an [online service](https://audio-joiner.com/) to join them into one file. 2. Next, get the start times of each clip of the audio file and store them in an array. 3. The following Demo will take the file and array, generate the HTML layout, create a button for each clip that corresponds to the array parameter. So when a button is clicked it will play only a clip of the audio sprite (the audio file). 4. The audio sprite in this Demo was not edited very well, I just made it to demonstrate how everything works. The timing relies on the timeupdate event which checks the playing time about every 250ms give or take. So if you want to make a more accurate start and end times, try leaving a gap of 250ms between clips. **Details commented in Demo** Demo ---- ```js // Store path to audio file in a variable var xFile = 'https://storage04.dropshots.com/photos7000/photos/1381926/20180318/175955.mp4' // Store cues of each start time of each clip in an array var xMap = [0, 1.266, 2.664, 3.409, 4.259,4.682, 5.311, 7.169, 7.777, 9.575, 10.88,11.883,13.64, 15.883, 16.75, 17, 17.58]; /* Register doc to act when the DOM is ready but before the images || are fully loaded. When that occurs, call loadAudio() */ document.addEventListener('DOMContentLoaded', function(e) { loadAudio(e, xFile, xMap); }); /* Pass the Event Object, file, and array through || Make a Template Literal of the HTML layout and the hidden || tag. 
Interpolate the ${file} into the tag. || Insert the TL into the and parse it into HTML. == Call generateBtn() function... \*/ function loadAudio(e, file, map) { var template = ` Sound FX Test Panel `; document.body.insertAdjacentHTML('beforeend', template); generateBtn(e, map); } /\* Pass the Event Object and the array through || Reference fieldset.set || create a documentFragment in order to speedup appending || map() the array... || create a || Set btn class to .btn || Set button.btn data-idx to the corresponding index value of || map array. || Set button.btn text to its index number. || Add button.btn to the documentFragment... || return an array of .btn (not used in this demo) == Call the eventBinder() function... \*/ function generateBtn(e, map) { var set = document.querySelector('.set'); var frag = document.createDocumentFragment(); map.map(function(mark, index, map) { var btn = document.createElement('button'); btn.className = 'btn'; btn.dataset.idx = map[index]; btn.textContent = index; frag.appendChild(btn); return btn; }); set.appendChild(frag); eventBinder(e, set, map); } /\* Pass EventObject, fieldset.set, and map array through || Reference the tag. || Register fieldset.set to the click event || if the clicked node (e.target) class is .btn... || Determine the start and end time of the audio clip. == Call playClip() function \*/ function eventBinder(e, set, map) { var sFX = document.getElementById('sndFX'); set.addEventListener('click', function(e) { if (e.target.className === 'btn') { var cue = parseFloat(e.target.textContent); var start = parseFloat(e.target.dataset.idx); if (cue !== (map.length - 1)) { var end = parseFloat(e.target.nextElementSibling.dataset.idx); } else { var end = parseFloat(sFX.duration); } playClip.call(this, sFX, start, end); } else { return false; } }); } /\* Pass the reference to the tag, start and end of clip || pause audio || Set the currentTime to the start parameter || Listen for timeupdate event... || should currentTime meet or exceed the end parameter... || pause tag. \*/ function playClip(sFX, start, end) { sFX.pause(); sFX.currentTime = start; sFX.play(); sFX.ontimeupdate = function() { if (sFX.currentTime >= end) { sFX.pause(); } } return false; } ``` Upvotes: 0
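Building on the `ended`-event idea from the first answer, the clips can be chained so each one starts only after the previous one finishes. This is a small TypeScript sketch using the standard `Audio` element; the file names are placeholders following the question's naming scheme:

```ts
async function playSequentially(sources: string[]): Promise<void> {
  for (const src of sources) {
    const clip = new Audio(src);
    await new Promise<void>((resolve, reject) => {
      clip.addEventListener('ended', () => resolve(), { once: true });
      clip.addEventListener('error', () => reject(new Error(`Could not play ${src}`)), { once: true });
      clip.play().catch(reject); // play() may reject, e.g. due to autoplay policies
    });
  }
}

// e.g. for the digits of "42": plays Q4.wav, then Q2.wav once Q4 has ended.
void playSequentially(['logo/Q4.wav', 'logo/Q2.wav']);
```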
2018/03/18
635
2,604
<issue_start>username_0: I want to know when, or how often should a Logging class flush its stream - and why? Should it flush after every log, or wait for the stream buffer to fill and then flush, or is it somewhere in between? To further illustrate my question: My logging class holds a reference to some `std::ostream stream` and uses `operator<<` for input. I am able to flush after every line of input, let's say by using `std::endl;` - but i don't, instead i `use stream << "\n"` to just force a newline. I let flush happen when the stream buffer is full or on stream /logger destruction.<issue_comment>username_1: As often as possible, so that no entries are lost upon a crash, and also to have better real-time information while the program is running. As rarely as possible to avoid overhead of writing to a file. These requirements conflict, so you need to find a balance that fits your needs. You might want to consider that some log messages may be less important than others, and you may use this to affect your flushing policy. You can use signal handlers (or similar OS specific techniques) to flush the buffer right before a crash. Note that flushing a `std::stream` is technically something you're not allowed to do in a signal handler (although that might depend in its implementation), but it *might* work and that's usually better than just crashing. Upvotes: 4 [selected_answer]<issue_comment>username_2: In C++ the iostreams library has 2 classes for logging. Both are bound to `stderr`. `cerr` - flushes after each input is formatted, which means after every call. `clog` - flushes when buffer is filled or called manually (example: `endl` or `flush()`). The thinking is that, you want errors to be output ASAP. But logging can be output in batches. Upvotes: 2 <issue_comment>username_3: **It depends.** Though logging is often narrowed down to denote some kinds of error-related messaging mechanism, it is not always tied with errors. In contrast, sometimes it is even explicitly separated with errors (e.g. `clog` vs. `cerr`). Another example is [*tracing*](https://en.wikipedia.org/wiki/Tracing_(software)) in general. So it is up to you, the designer of the logging system, to decide which integrity level you want to achieve with specific logging. Stream-based logging can be a simple way to report errors. It is not the feature of last sorts, but it is unwise to be depended as the *only* way to record *fatal error*s. Just better than nothing (particularly in cases where portability is emphasized)... when you *have to* rely on this way, flush it ASAP. Upvotes: 1
2018/03/18
750
2,741
<issue_start>username_0: A user can add some text to a textarea. Every 60 seconds the text get saved into the database. I need a message that show "Text is saved" for 3 seconds and hide again. The interval should be 60 seconds. The message should be display above the textarea but only when there is text in the textarea. I tried: ``` function autoSaveEntry() { if($('#txtarea').val().length>0){ $('#message').append('Text is saved'); setTimeout(function () { $('#message').fadeOut(function(){}); }, 3000); } setTimeout(autoSaveEntry, 60000) } ``` HTML ``` ```<issue_comment>username_1: There were quite a few problems, many of them simple typos: * Your 'txtarea' idea was spelled 'txtara' * The message ID had a # in it in the HTML * You never showed the message box after fading it out * There was nothing to trigger the function running in the first place (it was only ever called by itself). * your function continually appended the same text onto #message, instead of replacing it Here's a corrected version (times reduced significantly for demo, otherwise this is similar to your existing code): ```js function autoSaveEntry() { console.log($('#txtarea').val()) if ($('#txtarea').val().length > 0) { $('#message').text('Text is saved').show(); setTimeout(function() { $('#message').fadeOut(); }, 1000); } } setInterval(autoSaveEntry, 5000); ``` ```html ``` That's not a very good way to implement of what you're trying to do, though; it'll cause the message to constantly appear and fade again as long as the field contains text. Instead of running on an interval, and constantly saving the same value to the database every minute, consider running on change events. And instead of just checking to see if the field is empty, check to see if its contents have changed (and therefore need saving): ```js $('#txtarea').on('change', function() { if ($(this).val() !== $(this).data("lastval")) { // here you would save the data, and ideally wrap the following code in a promise resolve from that ajax call. $('#message').show(); setTimeout(function() { $('#message').fadeOut(); }, 1000); } $(this).data("lastval",$(this).val()); // stash the current value for next time }); ``` ```css #message { display: none } ``` ```html Text is saved ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: The corrected code is shown below: HTML: ``` ``` script: ``` setInterval(autoSaveEntry, 60000); function autoSaveEntry() { if($('#txtarea').val().length>0){ $('#message').append('Text is saved'); setTimeout(function () { $('#message').fadeOut(function(){}); }, 3000); } } ``` Upvotes: 0
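A plain-DOM variant of the accepted answer, combining the 60-second interval from the question with the "only announce a save when the text actually changed" idea; the element IDs follow the question's markup and the actual server call is left as an assumption:

```ts
const textarea = document.getElementById('txtarea') as HTMLTextAreaElement;
const message = document.getElementById('message') as HTMLElement;
let lastSaved = '';

setInterval(() => {
  const text = textarea.value;
  if (text.length === 0 || text === lastSaved) {
    return; // nothing new to save
  }
  // saveEntry(text) — the AJAX call that writes to the database would go here.
  lastSaved = text;
  message.textContent = 'Text is saved';
  message.style.display = 'block';
  setTimeout(() => { message.style.display = 'none'; }, 3000); // hide again after 3 s
}, 60000);
```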
2018/03/18
704
2,554
<issue_start>username_0: Say I have a form named `myform` with an input field named `myinput`. By default, symfony will generate HTML widget like: ``` ``` This doesn't work with my current javascripts and I need to get the following instead: ``` ``` I've searched quite a bit and found 2 ways to achieve this: * return null in `getBlockPrefix()` method in form type class. * create the form using `FormFactoryInterface::createNamed()` and pass `null` as name. It seems like the first method is not recommended, since it would limit the ability to customize form rendering using prefixed blocks. The 2nd method was recommended [here](https://stackoverflow.com/questions/8416783/symfony2-form-component-creating-fields-without-the-forms-name-in-the-name-att) and [here](https://stackoverflow.com/questions/13384056/symfony2-1-using-form-with-method-get/13474522#13474522). However both methods will change the name of the form to null and as a result symfony will generate the form like: ``` ``` This is probably because `form_div_layout.html.twig` looks like: **Which doesn't validate as HTML5.** According to [this page](https://github.com/symfony/symfony/issues/19818), "this is not a bug". I know I can override the `form_start` block in the template and remove the name altogether, but it seems that the form wasn't designed to be used with null names in general (hence no check for name length in the template). So my question is: What is the recommended and HTML5 compatible way to remove input name prefixes for symfony forms?<issue_comment>username_1: * Regarding PHP, I think either option is OK. * Regarding Twig, I'd go for creating a custom form [theme](https://symfony.com/doc/current/form/form_customization.html#what-are-form-themes "Theming Symfony Forms") and, optionally, [apply it to all forms](https://symfony.com/doc/current/form/form_customization.html#making-application-wide-customizations "Application-wide Customizations") on the site. I think that's the *Symfony way* of rendering a form according to your, specific, needs (nullable form name). Upvotes: 0 <issue_comment>username_2: It was a bug in form rendering. I've submitted a [pull request](https://github.com/symfony/symfony/pull/26584) to symfony repo which was accepted. Until the change is released, a temporary solution would be to add this code to your form theme: ``` {# fix HTML5 validation of forms with null names #} {% block form_start %} {% set name = name|default(block_prefixes.1) %} {{ parent() }} {% endblock %} ``` Upvotes: 1
2018/03/18
652
2,367
<issue_start>username_0: I would like to protect some endpoints in my Express app, and I want something that stays simple to manage if the app grows. Right now I'm doing something like this: ``` setProtected(router) { const self = this; router.use(this.auth); ... } setPublic(router) { const self = this; ... } getRouter() { const router = express.Router(); this.setPublic(router); this.setProtected(router); return router; } ``` with: ``` auth(req, res, next) { if(req.isAuthenticated()) { console.log('req.isAuthenticated()', req.isAuthenticated()); return next(); } return res.send(401); } ``` The problem is that this is difficult to maintain and it doesn't work well: if I have /:id in my public routes and, for example, /my-items in my protected routes, then when I'm not logged in and try to reach /my-items I get the handler for /:id instead. Another idea was to create a JSON file listing all my URLs with information such as protected/not protected and eventual roles, and then change auth to something like: ``` import urls from './urls'; auth(req, res, next) { if (urls[req.url] == 'public') { return next() } else if (urls[req.url] == 'protected' && req.isAuthenticated()) { return next(); } return res.send(401); } ``` What's the best way to handle this, in your opinion?
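For reference, a common way to keep this manageable is to group routes into separate routers, attach the auth middleware only to the protected router, and register the protected router before catch-all style public routes such as `/:id` so they don't shadow it. The sketch below is a minimal TypeScript illustration of that layout; it assumes `req.isAuthenticated()` is added by Passport (or similar), which is why the `any` cast is used, and the route paths are just examples:

```ts
import express, { NextFunction, Request, Response, Router } from 'express';

function requireAuth(req: Request, res: Response, next: NextFunction): void {
  // isAuthenticated() is assumed to be provided by Passport's session middleware.
  const authed = typeof (req as any).isAuthenticated === 'function' && (req as any).isAuthenticated();
  if (authed) {
    next();
  } else {
    res.sendStatus(401);
  }
}

const protectedRouter = Router();
protectedRouter.use(requireAuth); // everything on this router needs a session
protectedRouter.get('/my-items', (_req, res) => { res.json({ items: [] }); });

const publicRouter = Router();
publicRouter.get('/:id', (req, res) => { res.json({ id: req.params.id }); });

const app = express();
app.use(protectedRouter); // specific protected paths are matched first…
app.use(publicRouter);    // …so the public /:id no longer shadows /my-items
```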
2018/03/18
3,912
12,303
<issue_start>username_0: I'm try to install modules on Windows 10 using npm 5.6.0. When I enter npm install I get: ``` gyp ERR! configure error gyp ERR! stack Error: `gyp` failed with exit code: 1 gyp ERR! stack at ChildProcess.onCpExit (C:\Users\xiaooming\Desktop\app\node_modules\node-gyp\lib\configure.js:336:16) gyp ERR! stack at emitTwo (events.js:126:13) gyp ERR! stack at ChildProcess.emit (events.js:214:7) gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:198:12) gyp ERR! System Windows_NT 10.0.16299 gyp ERR! command "D:\\Program Files\\nodejs\\node.exe" "C:\\Users\\xiaooming\\Desktop\\app\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild" "--verbose" "--libsass_ext=" "--libsass_cflags=" "--libsass_ldflags=" "--libsass_library=" gyp ERR! cwd C:\Users\xiaooming\Desktop\app\node_modules\node-sass gyp ERR! node -v v8.10.0 gyp ERR! node-gyp -v v3.6.2 gyp ERR! not ok Build failed with error code: 1 npm WARN kingt4.0app@1.0.0 No repository field. npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.1.3 (node_modules\fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.1.3: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"}) npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! node-sass@3.13.1 postinstall: `node scripts/build.js` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the node-sass@3.13.1 postinstall script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! C:\Users\xiaooming\AppData\Roaming\npm-cache\_logs\2018-03-18T13_02_03_946Z-debug.log ``` It seems like `node-sass` install error. The environment is below: Python version: 2.7.14 npm version: 5.6.0 node version: 8.10.0 ruby version: 2.3.3p222 (2016-11-21 revision 56859) [i386-mingw32] system: win10(x64) node-gyp: 3.6.2 And I have installed Microsoft Visual Studio express 2015, the below command has been executed: ``` npm config set msvs_version 2015 node-gyp configure --msvs_version=2015 npm install --global --production windows-build-tools ``` package.json ------------ ``` { "name": "kingt4.0app", "version": "1.0.0", "description": "an kingt4.0 project.", "main": "index.js", "scripts": { "test": "test.js" }, "keywords": [ "finance" ], "author": "kingdom", "license": "ISC", "devDependencies": { "browser-sync": "^2.11.1", "gulp": "^3.9.1", "gulp-autoprefixer": "^3.1.0", "gulp-cache": "^0.4.2", "gulp-changed": "^1.3.2", "gulp-clean-css": "^2.0.2", "gulp-file-include": "^0.13.7", "gulp-if": "^2.0.0", "gulp-imagemin": "^2.4.0", "gulp-rename": "^1.2.2", "gulp-sass": "^2.2.0", "gulp-sourcemaps": "^1.6.0", "gulp.spritesmith": "^6.2.1", "imagemin-jpegtran": "^4.3.2", "imagemin-pngquant": "^4.2.2", "merge-stream": "^1.0.0", "normalize.css": "^4.0.0", "spritesheet-templates": "^10.1.2", "vinyl-buffer": "^1.0.0" }, "dependencies": { "remodal": "^1.0.7", "slick-carousel": "^1.6.0" } } ``` How to resolve this problem? 
thanks for help.<issue_comment>username_1: Delete your `$HOME/.node-gyp` directory and try again: ``` rm -R ~/.node-gyp ``` Upvotes: 3 <issue_comment>username_2: I think delete this directory is better: ``` rm -rf ~/.node-gyp/ rm -r node_modules/.bin/; rm -r build/ npm install ``` Upvotes: 2 <issue_comment>username_3: As described here: <https://github.com/nodejs/node-gyp/issues/1634> ``` You need to install postgresql-server-dev-X.Y for building a server-side extension or libpq-dev for building a client-side application. gyp: Call to '/usr/bin/pg_config --libdir' returned exit status 1 while in binding.gyp. while trying to load binding.gyp ``` In my case the problem solved by installing `libpq-dev` on my Ubuntu ``` $ sudo apt install libpq-dev ``` then try to removing the `node_modules` and do `npm install` again. **Note** Please take a note that this is not a problem with the node modules, the problem is on the my machine, it doesn't meet the criteria of `libpq` Upvotes: 0 <issue_comment>username_4: You may need to install Windows build tools. Check it out this [ref](https://github.com/felixrieseberg/windows-build-tools). Upvotes: 1 <issue_comment>username_5: this works for me. run command below one by one ``` rm -rf node_modules rm package-lock.json npm update npm install ``` Upvotes: 6 <issue_comment>username_6: I've faced the same issue on my `OSX 10.15` with `brew` as repository; for me was enough doing: ``` brew unlink python@2 brew link python ``` Upvotes: 0 <issue_comment>username_7: If this is a mac machine (OSX) here is what you can do use terminal ``` xcode-select --print-path ``` then remove installed version ``` sudo rm -r -f /Library/Developer/CommandLineTools ``` and reinstall ``` xcode-select --install ``` that should fix the problem Ref: [gyp: No Xcode or CLT version detected macOS Catalina](https://anansewaa.com/gyp-no-xcode-or-clt-version-detected-macos-catalina/) Upvotes: 7 <issue_comment>username_8: Try this command below and see if that wiped away this issue. I have catalina on my mac and this command wiped away that problem. ``` npm i -g node-gyp@latest && npm config set node_gyp "/usr/local/lib/node_modules/node-gyp/bin/node-gyp.js" ``` Upvotes: 4 <issue_comment>username_9: For MAC ONLY Step: 1 ``` sudo rm -r -f /Library/Developer/CommandLineTools ``` Step: 2 ``` sudo xcode-select --switch /Library/Developer/CommandLineTools/ ``` Upvotes: -1 <issue_comment>username_10: if you remove your package-lock.json and then run ``` npm update npm install ``` Upvotes: 2 <issue_comment>username_11: I had the same issue while installing a project which uses the webpack. And i did follow the steps about xcode-select reinstallation but it gave me the same error. For this I did a simple fix, I installed node-sass separately as it was the one giving error. ``` npm i node-sass ``` Then ran the installation again using, ``` npm i ``` And it worked for me✌️ Sometimes the node packages have older versions which needs to be upgraded as well for a project. For this you get vulnerability related warnings while installation. for this run the following command(it is also mentioned in terminal output), ``` npm audit fix ``` Upvotes: 2 <issue_comment>username_12: I faced similar issue where my MSVS\_version was 2019.I tried following steps and issue was resolved. 
``` npm install --global --production windows-build-tools ``` Then ``` npm config set msvs_version 2017 npm i ``` Upvotes: 2 <issue_comment>username_13: If you watch some sort of old tutorial and want to install *webworker-threads* from NPM, you should know that it's a part of Node JS. ``` const { Worker } = require('worker_threads'); ``` You can see more about differences here <https://nodejs.org/api/worker_threads.html> Upvotes: -1 <issue_comment>username_14: This error is due to obsolete packages. It can also be due to a version change of NodeJS, npm or other packages and/or package managers. To solve it you must remove `node_modules` and execute the command: ``` npm i --ignore-script ``` It's possible that `node-sass` gives you an error because it is already obsolete, so install npm sass. In case it's another package, just check the current version and update it. Upvotes: 4 <issue_comment>username_15: I had to update to dart-sass, was still using node-sass which is deprecated. <https://github.com/sass/dart-sass#from-npm> Upvotes: 0 <issue_comment>username_16: ``` $ npm i --ignore-script ``` I tried different ways to solve this issue, finally, this comment simply help to install node packages without any errors. Upvotes: 4 <issue_comment>username_17: I was facing similar issue while doing npm install for one of existing vuejs app. Initially it complained for missing python. after installing python, i started getting this error - gyp ERR! stack Error: `gyp` failed with exit code: 1 Later I realized that the project also had package-lock.json. I removed it and did npm install. It worked!!! So if you are facing similar issue, and also have package-lock.json then try removing it. Upvotes: 0 <issue_comment>username_18: I was stuck with this issue for quite long on `macOS`. For me, on macOS, I had to 1. Install Python 2.7.x while Python 3 was already installed 2. Ensure Python 2 path was included in $PATH 3. Switch CLI. It didn't work in iTerm2, but worked fine on standard macOS terminal I did install `xcode` as well, but that didn't have an effect Upvotes: 0 <issue_comment>username_19: I got the same `gyp ERR!` error on Ubuntu 22 when I was installing [node-heapdump](https://www.npmjs.com/package/heapdump). To fix it, I run the following command: ``` sudo apt install node-gyp ``` Upvotes: 0 <issue_comment>username_20: I fixed this issue by changing the node version. I use nvm to handle multiple node versions. I was using node v14.x and npm v6.x and I had to change node to 16.x and npm 8.x I did: ``` nvm use 16 ``` If you don't have nvm, you need to install it using brew: ``` brew install nvm ``` I know there could be multiple fixes for this error but this worked for me. Later edit: I also switched from node-sass to sass as node-sass is not maintained anymore and the migration is not that complex. You can have a look at how to do this [here](https://dev.to/ccreusat/migrating-from-node-sass-to-sass-dart-sass-with-npm-3g3c). Upvotes: 0 <issue_comment>username_21: macOS install gpu.js ==================== Running on MacOS following these install steps: 1. [install nvm](https://github.com/nvm-sh/nvm#installing-and-updating) `curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash` then open a new shell session. 2. nvm install 16 3. nvm use 16 4. brew install yarn [OR] `npm install --global yarn` 5. 
yarn add gpu.js ```bash node gpu-test.js ``` Test file example `gpu-test.js`: ```js const { GPU } = require('gpu.js'); const gpu = new GPU(); const multiplyMatrix = gpu.createKernel(function(a, b) { let sum = 0; for (let i = 0; i < 2; i++) { sum += a[this.thread.y][i] * b[i][this.thread.x]; } return sum; }).setOutput([2, 2]); const a = new Array(2).fill(new Array(2).fill(1)); const c = multiplyMatrix(a, a); console.log(a); console.log(c); ``` [![output gpu.js on macOS](https://i.stack.imgur.com/mVIWV.png)](https://i.stack.imgur.com/mVIWV.png) Upvotes: 0 <issue_comment>username_22: This can also occurs if you have Python 3.11 installed, as mentioned here <https://stackoverflow.com/a/74732671/2630344> Upvotes: 2 <issue_comment>username_23: I ran into this issue from a web development context via node-sass or gulp-sass which uses the gyp library that fails to install. **tldr; too old, upgrade to sass** which will avoid the gyp issue Did some deep investigating on the issue. Several people may have ran into this issue because they're attempting to use `gulp-sass` which depends on `node-sass` in their development and along the way this gyp issue popped up. Initially in the gulp-sass installation an error popped up complaining that python2.7 was not installed in my computer. Once I installed that, the gyp issue showed up due to an incompatability of the newest node and gyp. My solution was to use the newest gulp with the newest sass. Hope this helps anyone who arrived here from web development. Upvotes: 0 <issue_comment>username_24: You probably need to change `node-sass` to `sass` in your `package.json` as stated on `node-sass`'s official page (see [here](https://www.npmjs.com/package/node-sass)): > > **Warning**: LibSass and Node Sass are deprecated. While they will continue to receive maintenance releases indefinitely, there are no plans to add additional features or compatibility with any new CSS or Sass features. Projects that still use it should move onto Dart Sass. > > > Upvotes: 0 <issue_comment>username_25: I found a solution: 1、install corresponding version of node-sass; see:<https://www.npmjs.com/package/node-sass> 2、install latest version of sass-loader; see:<https://www.npmjs.com/package/sass-loader> 3、install python and add into PATH; 4、run npm install; Upvotes: 0
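Several of the answers above (the `sass` / Dart Sass suggestions) converge on the same underlying fix for this class of `node-gyp` failure: stop depending on `node-sass`, which needs a native build, and compile with Dart Sass instead. As a rough sketch only, where the task name, globs and file paths are illustrative assumptions and not taken from the original project, the migration for a gulp setup like the one in the question's `package.json` might look like this (after running `npm uninstall node-sass` and `npm install --save-dev sass gulp-sass`):

```js
// gulpfile.js (hypothetical excerpt)
const gulp = require('gulp');
// gulp-sass v5+ ships without a compiler; hand it Dart Sass explicitly.
const sass = require('gulp-sass')(require('sass'));

gulp.task('styles', function () {
  return gulp.src('./src/scss/**/*.scss')
    .pipe(sass().on('error', sass.logError)) // pure-JS compile, no node-gyp involved
    .pipe(gulp.dest('./dist/css'));
});
```

Because Dart Sass is plain JavaScript, the `postinstall` build step that fails in the log above simply disappears.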
2018/03/18
1,156
4,463
<issue_start>username_0: Update: it is a Google Play Service issue, reported internally [here](https://github.com/apache/cordova-plugin-geolocation) and it will be fixed from version 13.4.0 We use the [cordova gelocation plugin](https://github.com/apache/cordova-plugin-geolocation) and the method `navigator.geolocation.watchPosition()` with the option `enableHighAccuracy: true` to track the user location and get the most accurate results. Our app has been around for more than 1 year and we used to have no problems with any device getting a very good location accuracy, 4/6 meters when outside and the sky is clear. Lately, many of our users are reporting not being able to get anything less than 10m accuracy no matter what they do. We decided to test it ourselves and we found to have the same issue. Initially, we thought we introduced some bug in our latest release, we triple checked everything but we made no changes to code/dependencies involving geolocation. We tested older versions of our app as well, where we are sure it was possible to get like 4m accuracy, but surprisingly they do not work either, accuracy is capped at 10m. We tried different version of Android and we can reproduce the issue on any platform from 5 (Lollipop) to 8 (Oreo). We also have the same problem on iOS 10/11. Again, we have not updated the app in months. There is a recent question about the same issue [here](https://stackoverflow.com/questions/48925394/navigator-geolocation-getcurrentposition-in-cordova-gives-only-10-meter-accuracy): Someone else is having the same problem using Android native code [here](https://stackoverflow.com/questions/49044731/fusedlocationapi-performance-issue-accuracy-seems-capped-at-10-0-meters) Does anyone know what is going on? Is it a permission issue? Location Services are set to High Accuracy as well. For the sake of completeness, we are able to get 3/4 meters accuracy using the old version (2.x) of this [plugin](https://github.com/mauron85/cordova-plugin-background-geolocation/tree/2.x) Is it the only way to go? We would rather not introduce an extra dependency for something that was working so well out of the box. Many thanks<issue_comment>username_1: Looking at source code: Old plugin (2.x) [Source](https://github.com/apache/cordova-plugin-geolocation/blob/2.0.0/www/android/geolocation.js): ``` watchPosition: function(success, error, args) { var win = function() { var geo = cordova.require('cordova/modulemapper').getOriginalSymbol(window, 'navigator.geolocation'); geo.watchPosition(success, error, { enableHighAccuracy: args[1] }); }; exec(win, error, "Geolocation", "getPermission", []); }, ``` New Plugin (master) [Source](https://github.com/apache/cordova-plugin-geolocation/blob/master/www/android/geolocation.js): ``` watchPosition: function(success, error, args) { var pluginWatchId = utils.createUUID(); var win = function() { var geo = cordova.require('cordova/modulemapper').getOriginalSymbol(window, 'navigator.geolocation'); pluginToNativeWatchMap[pluginWatchId] = geo.watchPosition(success, error, args); }; var fail = function() { if (error) { error(new PositionError(PositionError.PERMISSION_DENIED, 'Illegal Access')); } }; exec(win, fail, "Geolocation", "getPermission", []); return pluginWatchId; }, ``` In OLD plugin code **enableHighAccuracy** is a boolean set by (arg1 of array). With NEW version of plugin you need to pass arg as JSON with that flag set: **{enableHighAccuracy: true}** to reproduce same call to **geo.watchPosition** function with high accuracy. 
**Old Way:** ``` navigator.geolocation.watchPosition(geolocationSuccess, geolocationError, [false,true]); ``` **New Way:** ``` navigator.geolocation.watchPosition(geolocationSuccess, geolocationError, { enableHighAccuracy: true }); ``` Upvotes: 2 <issue_comment>username_2: To whom it might concern, it is a Google Play Services issue, reported internally [here](https://issuetracker.google.com/u/0/issues/79189573) and it will be fixed from version 13.4.0 Update: solved after updating to Play Services 14.3.66, accuracy down to 4m again! Upvotes: 2 [selected_answer]
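For readers landing here because of the option-format change rather than the Play Services bug, here is a minimal sketch of a high-accuracy watch using the current option-object form; the 10 m threshold and the timeout values are arbitrary choices for illustration:

```js
// Start watching with the W3C-style options object the plugin expects today.
var watchId = navigator.geolocation.watchPosition(
  function (position) {
    var c = position.coords;
    if (c.accuracy <= 10) { // accuracy is reported in metres
      console.log('Fix:', c.latitude, c.longitude, '+/-' + c.accuracy + ' m');
    }
  },
  function (error) {
    console.warn('watchPosition failed:', error.code, error.message);
  },
  { enableHighAccuracy: true, maximumAge: 0, timeout: 30000 }
);

// Stop the watch when it is no longer needed, e.g. on pause, to save battery:
// navigator.geolocation.clearWatch(watchId);
```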
2018/03/18
585
2,065
<issue_start>username_0: Would anybody like to explain why this code is incorrect for template functions but works well for ordinary functions? For instance, if we replace `std::copy` with a non-template function, there is no problem. How can I change the code to make it valid for both template and non-template functions? ``` auto functionSpan = [](auto&& func, auto&&... args) { auto t1 = std::chrono::high_resolution_clock::now(); std::forward(func)(std::forward(args)...); auto t2 = std::chrono::high_resolution_clock::now(); return std::chrono::duration_cast(t2-t1).count(); }; vector<int> vec {1,2,3,4,5,6,7,8,9}; functionSpan(std::copy, vec.begin(), vec.end(), std::ostream_iterator<int>(std::cout, " ")); ```<issue_comment>username_1: `std::copy` does not refer to a single function, so it cannot be passed as an argument as-is. You need to specify its template arguments explicitly (or wrap the call) so that a concrete function is named. Upvotes: 1 <issue_comment>username_2: The problem is that `std::copy` (and really, *any* `template` function) isn't a regular function, so the compiler can't deduce it. What the compiler needs is a real function, although you could stamp out the arguments to `std::copy`. The issue with explicitly stamping out the arguments is that your code isn't generic anymore, plus it's tedious and error-prone when dealing with iterators. Instead, I'd suggest changing your function signatures a bit: ``` auto functionSpan = [](auto&& fn) { auto t1 = std::chrono::high_resolution_clock::now(); fn(); auto t2 = std::chrono::high_resolution_clock::now(); return std::chrono::duration_cast(t2-t1).count(); }; auto copy_helper = [](auto && ... args) { return [args...]() { std::copy(args...); }; }; std::vector<int> vec {1,2,3,4,5,6,7,8,9}; functionSpan(copy_helper(vec.begin(), vec.end(), std::ostream_iterator<int>(std::cout, " ") )); ``` This compiles for me (I removed the `forward`ing to make my life simpler) and runs as expected. You'll have to write a specific helper for each `template` function, but it's less boilerplate than trying to stamp out the `template` arguments since you still get most of the type deduction. Upvotes: 0
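Both answers boil down to the same point: a function template such as `std::copy` does not name a single function until its template arguments are fixed, so it cannot be deduced as a function argument. Here is a self-contained sketch of the lambda-wrapping approach; the helper name and the microseconds unit are my own choices, not from the original post:

```cpp
#include <algorithm>
#include <chrono>
#include <iostream>
#include <iterator>
#include <vector>

// Times an arbitrary callable; the lambda passed in fixes std::copy's template arguments.
template <typename F>
long long timeIt(F&& fn) {
    auto t1 = std::chrono::high_resolution_clock::now();
    std::forward<F>(fn)();
    auto t2 = std::chrono::high_resolution_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(t2 - t1).count();
}

int main() {
    std::vector<int> vec {1, 2, 3, 4, 5, 6, 7, 8, 9};
    auto micros = timeIt([&] {
        std::copy(vec.begin(), vec.end(), std::ostream_iterator<int>(std::cout, " "));
    });
    std::cout << "\ntook " << micros << " us\n";
}
```

Because the lambda is an ordinary closure object, the same `timeIt` works unchanged for template and non-template callables alike.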
2018/03/18
1,463
5,508
<issue_start>username_0: I'm very new at coding, and I'm trying to create a shop list with items and prices on it. That is, once typed in all the items, the function should calculate the sum and stop the moment you exceed the budget. So I wrote something like: ``` def shoplist(): list={"apple":30, "orange":20, "milk":60......} buy=str(input("What do you want to purchase?") If buy in list: While sum<=budget: sum=sum+?? shoplist () ``` I really don't know how to match the input of an item with the price in the list... My first thought is to use 'if', but it's kinda impractical when you have more than 10 items on the list and random inputs. I'm in desperate need of help....So any suggestions would be nice!! (or if you have a better solution and think me writing it this way is complete garbage... PLEASE let me know what those better solutions are<issue_comment>username_1: The code you post will not run in python. `list` is a builtin and should not be used for a variable name, and is doubly confusing since it refers to a dict object here. `input()` already returns a str so the cast has no effect. `if` and `while` should be lowercase, and there is no indentation, so we have no way of knowing the limits of those statements. There are so many things wrong, take a look at this: ``` def shoplist(budget): prices = {"apple":30, "orange":20, "milk":60} # Initialise sum sum = 0 while sum <= budget: buy = input("What do you want to purchase?") # Break out of the loop if the user hts if not buy: break if buy in prices: sum += prices[buy] # This gets the price else: print("Invalid item", buy) shoplist(142) ``` So what have I changed? The budget has to come from somewhere, so I pass it in as a parameter (142, I made that up). I initialise the sum to zero, and I moved the `while` loop to the outside. Notice as well lots of whitespace - it makes the code easier to read and has no effect on performance. Lots of improvements to make. The user should be shown a list of possible items and prices and also how much budget there is left for each purchase. Note as well that it is possible to go *over budget* since we might only have 30 in the budget but we can still buy milk (which is 60) - we need another check (`if` statement) in there! I'll leave the improvements to you. Have fun! Upvotes: 2 [selected_answer]<issue_comment>username_2: Take a look at this as an example: ``` # this is a dictionary not a list # be careful not using python reserved names as variable names groceries = { "apple":30, "orange":20, "milk":60 } expenses = 0 budget = 100 cart = [] # while statements, as well as if statements are in lower letter while expenses < budget: # input always returns str, no need to cast user_input = input("What do you want to purchase?") if user_input not in groceries.keys(): print(f'{user_input} is not available!') continue if groceries[user_input] > budget - expenses: print('You do not have enough budget to buy this') user_input = input("Are you done shopping?Type 'y' if you are.") if user_input == 'y': break continue cart.append(user_input) # this is how you add a number to anotherone expenses += groceries[user_input] print("Shopping cart full. You bought {} items and have {} left in your budget.".format(len(cart), budget-expenses)) ``` Upvotes: 0 <issue_comment>username_3: I've made some changes to your code to make it work, with explanation including using comments indicated by the `#` symbol. 
The two most important things are that all parentheses need to be closed: ``` fun((x, y) # broken fun((x, y)) # not broken ``` and keywords in Python are all lowercase: ``` if, while, for, not # will work If, While, For, Not # won't work ``` You might be confused by `True` and `False`, which probably should be lowercase. They've been that way so long that it's too late to change them now. ``` budget = 100 # You need to initialize variables before using them. def shoplist(): prices = { # I re-named the price list from list to prices 'apple' : 30, # because list is a reserved keyword. You should only 'orange' : 20, # use the list keyword to initialize list objects. 'milk' : 60, # This type of object is called a dictionary. } # The dots .... would have caused an error. # In most programming languages, you need to close all braces (). # I've renamed buy to item to make it clearer what that variable represents. item = input('What do you want to purchase? ') # Also, you don't need to cast the value of input to str; # it's already a str. if item in prices: # If you need an int, you do have to cast from string to int. count = int(input('How many? ')) cost = count*prices[item] # Access dictionary items using []. if cost > budget: print('You can\'t afford that many!') else: # You can put data into strings using the % symbol like so: print('That\'ll be %i.' % cost) # Here %i indicates an int. else: print('We don\'t have %s in stock.' % item) # Here %s means str. shoplist() ``` A lot of beginners post broken code on StackOverflow without saying that they're getting errors or what those errors are. It's always helpful to post the error messages. Let me know if you have more questions. Upvotes: 0
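Both answers rely on the same core idea: the dictionary lookup `prices[item]` replaces any chain of `if` statements per product. Here is a compact sketch tying the pieces together; the prices and the budget of 100 are made-up sample values:

```python
def shop(budget):
    prices = {"apple": 30, "orange": 20, "milk": 60}
    spent = 0
    cart = []
    while True:
        item = input("What do you want to purchase? (press Enter to stop) ").strip()
        if not item:
            break
        if item not in prices:
            print(f"Sorry, we don't stock {item}.")
            continue
        if spent + prices[item] > budget:
            print(f"{item} costs {prices[item]}, but only {budget - spent} is left in the budget.")
            continue
        cart.append(item)
        spent += prices[item]  # the dictionary lookup gives the price for any item name
    print(f"You bought {cart} for {spent} out of a budget of {budget}.")

shop(100)
```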
2018/03/18
1,043
4,678
<issue_start>username_0: So I've been trying to implement OAuth2 in a simple Spring MVC app. In the guide I was following, in their `AuthorizationServerConfigurerAdapter` they `@Autowired` an `AuthenticationManager`. They used Spring Boot version 1.5.2. I wanted to use Spring Boot 2.0.0, as this is the latest version, so I could learn the latest practices. However, in my pom.xml when I change: ``` <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>1.5.2.RELEASE</version> ``` to: ``` <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.0.0.RELEASE</version> ``` all of a sudden I can't autowire `AuthenticationManager`: `Could not autowire. No beans of 'AuthenticationManager' type found.` Could someone come up with a solution to this? Thanks!<issue_comment>username_1: If you want to continue with the Boot starter packages, then according to the [release notes](https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-2.0-Migration-Guide#authenticationmanager-bean) you need to override the `authenticationManagerBean` method inside your `WebSecurityConfigurerAdapter`. Here is a code sample: ``` @Configuration @EnableWebSecurity public class WebSecurityConfig extends WebSecurityConfigurerAdapter { @Bean @Override public AuthenticationManager authenticationManagerBean() throws Exception { return super.authenticationManagerBean(); } } ``` Upvotes: 7 [selected_answer]<issue_comment>username_2: In the latest versions of Spring Boot (e.g. `2.7.2`), the `WebSecurityConfigurerAdapter` class is deprecated and you have to use the new style of writing security configurations: [Spring Security without the WebSecurityConfigurerAdapter](https://spring.io/blog/2022/02/21/spring-security-without-the-websecurityconfigureradapter). With that being said, something like the code below works for me with Spring Boot 2.7.2. I have a JWT token filter that needs to be plugged in to verify incoming JWT tokens.
Trying to highlight the usage of - `SecurityFilterChain` & `AuthenticationConfiguration` ``` import java.util.List; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.http.HttpMethod; import org.springframework.security.authentication.AuthenticationManager; import org.springframework.security.authentication.AuthenticationProvider; import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder; import org.springframework.security.config.annotation.authentication.configuration.AuthenticationConfiguration; import org.springframework.security.config.annotation.web.builders.HttpSecurity; import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity; import org.springframework.security.config.http.SessionCreationPolicy; import org.springframework.security.web.AuthenticationEntryPoint; import org.springframework.security.web.SecurityFilterChain; import org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter; import org.springframework.security.web.authentication.www.BasicAuthenticationFilter; import org.springframework.security.web.util.matcher.RequestMatcher; //import my custom jwt class package; import lombok.RequiredArgsConstructor; @Configuration @EnableWebSecurity @RequiredArgsConstructor public class WebSecurityConfig { private final AuthenticationConfiguration authConfiguration; @Bean public AuthenticationManager authenticationManager() throws Exception { return authConfiguration.getAuthenticationManager(); } @Autowired public void configure(AuthenticationManagerBuilder builder, AuthenticationProvider jwtAuthenticationProvider) { builder.authenticationProvider(jwtAuthenticationProvider); } @Bean public SecurityFilterChain configure(HttpSecurity http, AuthenticationEntryPoint authenticationEntryPoint, RequestMatcher requestMatcher) throws Exception { http.cors().and().csrf().disable().exceptionHandling().authenticationEntryPoint(authenticationEntryPoint).and() .sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS).and().authorizeRequests() .antMatchers(HttpMethod.OPTIONS, "/**").permitAll() .antMatchers(HttpMethod.GET, List.of("/favicon.ico", "/**/*.html").toArray(new String[0])).permitAll(); AbstractAuthenticationProcessingFilter jwtFilter = new MyCustomClass(requestMatcher); jwtFilter.setAuthenticationManager(authenticationManager()); http.addFilterBefore(jwtFilter, BasicAuthenticationFilter.class); return http.build(); } } ``` Upvotes: 3
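For readers who only need the `AuthenticationManager` bean and none of the JWT plumbing, here is a stripped-down sketch of the same `AuthenticationConfiguration` approach; the class name and the HTTP rules are placeholders:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.authentication.AuthenticationManager;
import org.springframework.security.config.annotation.authentication.configuration.AuthenticationConfiguration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
@EnableWebSecurity
public class SecurityConfig {

    // Spring Security 5.7+: expose the manager via AuthenticationConfiguration instead of
    // overriding WebSecurityConfigurerAdapter#authenticationManagerBean().
    @Bean
    public AuthenticationManager authenticationManager(AuthenticationConfiguration config) throws Exception {
        return config.getAuthenticationManager();
    }

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http.csrf().disable()
            .authorizeRequests(auth -> auth.anyRequest().authenticated())
            .httpBasic();
        return http.build();
    }
}
```

The `AuthenticationManager` bean can then be injected into an authorization-server or JWT configuration in the same places the older `authenticationManagerBean()` override used to be injected.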
2018/03/18
569
1,508
<issue_start>username_0: I need a SQL statement that first sorts by intervals of 1000 `Hours` and after that sorts `Points` in DESC order. I can't figure out how to use `Case` or `Between`/`case` in this example. The current result with `SELECT * FROM CurrencyUser ORDER BY Hours DESC, Points DESC LIMIT 6` is ``` +--------+-------+ | Points | Hours | +--------+-------+ | 27 | 8005 | | 125 | 7200 | | 200 | 7100 | | 567 | 1070 | | 575 | 1050 | | 450 | 1020 | +--------+-------+ ``` and this is my wanted result ``` +--------+-------+ | Points | Hours | +--------+-------+ | 27 | 8005 | | 200 | 7100 | | 125 | 7200 | | 575 | 1050 | | 567 | 1070 | | 450 | 1020 | +--------+-------+ ``` Both Points and Hours are plain integers.<issue_comment>username_1: ```sql SELECT * FROM CurrencyUser ORDER BY cast(Hours/1000 as int) DESC, Points DESC LIMIT 6; ``` Upvotes: 1 [selected_answer]<issue_comment>username_2: Use either `FLOOR(Hours / 1000)` or `Hours DIV 1000`: ``` SELECT * FROM CurrencyUser ORDER BY Hours DIV 1000 DESC, Points DESC ``` Note that casting to `UNSIGNED` will act as `ROUND()`, and thus `7400` will be in group `7` but `7600` in group `8`. This case is not covered by your sample data. Run this test to see the difference: ``` set @Hours = 7600; select cast(@Hours/1000 as unsigned) , round(@Hours/1000) , floor(@Hours/1000) , @Hours DIV 1000 ``` Demo: <http://rextester.com/DZYZ80148> Upvotes: 1
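To make the bucketing concrete, here is a throwaway version with the sample rows inlined (MySQL 8+ syntax for the CTE is assumed; on older versions the same ORDER BY works against the real table):

```sql
WITH CurrencyUser (Points, Hours) AS (
  SELECT 27, 8005 UNION ALL
  SELECT 125, 7200 UNION ALL
  SELECT 200, 7100 UNION ALL
  SELECT 567, 1070 UNION ALL
  SELECT 575, 1050 UNION ALL
  SELECT 450, 1020
)
SELECT Points, Hours, Hours DIV 1000 AS bucket   -- 8005 -> 8, 7200 -> 7, 1050 -> 1, ...
FROM CurrencyUser
ORDER BY bucket DESC, Points DESC
LIMIT 6;
```

Rows first group by the 1000-hour bucket (8, then 7, then 1) and only then fall back to `Points DESC`, which reproduces the wanted result above.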
2018/03/18
457
1,537
<issue_start>username_0: Assume the type of 'a' is Vector3d; I wonder what the type of `1-a.array()` is. I have this doubt because `a.cwiseProduct(b)` works but `(1-a.array()).cwiseProduct(b)` results in a compilation error. What's the right way to write code like `(1-a.array()).cwiseProduct(b)`? ``` int main() { VectorXd a(3),b(3); a << 1,2,3; b << 2,3,4; // cout << (1-a.array()).cwiseProduct(b) << endl; //failed a = 1-a.array(); cout << a.cwiseProduct(b) << endl; //works return 0; } ```<issue_comment>username_1: `cwiseProduct` only works on matrix expressions, not on array (element-wise) expressions (where the "cwise" would be redundant). The easy options are to either cast the `(1-a.array())` expression to a matrix expression, `(1-a.array()).matrix().cwiseProduct(b)` (done at compile time, so free); or better yet, if you want to do element-wise work, just use array objects throughout: `(1-a.array()) * b.array()` (again, the compile-time casts are free). Upvotes: 3 [selected_answer]<issue_comment>username_2: The expression can be rewritten without using `.array()`, by replacing the `1` with a vector of ones: ``` (Vector3d::Ones() - a).cwiseProduct(b) ``` This version is arguably somewhat cleaner, as the operations are easier to recognize than in the version with `.array()`, since the bracket contains a simple difference between two vectors. A further equivalent variant consists in using `.asDiagonal()` instead of `.cwiseProduct()`: ``` (Vector3d::Ones() - a).asDiagonal() * b ``` Upvotes: 2
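A self-contained sketch showing the equivalent spellings side by side (using fixed-size `Vector3d` for brevity; the question's `VectorXd` behaves the same way):

```cpp
#include <iostream>
#include <Eigen/Dense>
using Eigen::Vector3d;

int main() {
    Vector3d a(1, 2, 3), b(2, 3, 4);

    // 1) Cast the array expression back to a matrix expression first:
    Vector3d r1 = (1 - a.array()).matrix().cwiseProduct(b);

    // 2) Stay in "array land" and multiply element-wise:
    Vector3d r2 = ((1 - a.array()) * b.array()).matrix();

    // 3) Avoid .array() entirely by subtracting from a vector of ones:
    Vector3d r3 = (Vector3d::Ones() - a).cwiseProduct(b);

    std::cout << r1.transpose() << "\n" << r2.transpose() << "\n" << r3.transpose() << "\n";
}
```

All of these casts happen at compile time, so there is no runtime cost to choosing whichever form reads best.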
2018/03/18
254
830
<issue_start>username_0: I would like some help solving the error in the following Google Apps Script code. The error given is "Missing ; before statement. (line 1, file "Code")" ``` Function onEdit(event) { var sheet = event.source.getActiveSheet(); var editedCell = sheet.getActiveCell(); var columnToSortBy = 13; var tableRange = "A4:R100"; if(editedCell.getColumn() == columnToSortBy){ var range = sheet.getRange(tableRange); range.sort( { column : columnToSortBy } ); } } ```<issue_comment>username_1: I think you made a typo here: the "f" in "Function" should be in lowercase. Upvotes: 3 [selected_answer]<issue_comment>username_2: Try this: ``` function onEdit(e) { if(e.range.getColumn() == 2){ var range = e.range.getSheet().getRange("A1:B10"); range.sort({column:2}); } } ``` Upvotes: 0
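Putting the two answers together, the original trigger only needs the keyword casing fixed; everything else (column 13, the `A4:R100` range) can stay as posted:

```javascript
function onEdit(event) {          // lowercase "function" is the whole fix
  var sheet = event.source.getActiveSheet();
  var editedCell = sheet.getActiveCell();
  var columnToSortBy = 13;
  var tableRange = "A4:R100";

  if (editedCell.getColumn() == columnToSortBy) {
    var range = sheet.getRange(tableRange);
    range.sort({ column: columnToSortBy });  // add "ascending: false" here if a descending sort is wanted
  }
}
```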
2018/03/18
977
4,101
<issue_start>username_0: React native noob here , I wanna know if it's possible to use the FullTextSearch feature of sqlite in react, if so , tell me where i can learn more about it . Thanks !<issue_comment>username_1: Use **Realm Database for React Native**, Realm is an object-oriented database. OO model makes it 10x faster than SQLite and saves you from running tons of query which is a common thing for a typical SQL database and **fuse.js** may help you to search text. Upvotes: 2 [selected_answer]<issue_comment>username_2: ``` realm = new Realm({ schema: [StudentScheme] }) const mydata = realm.objects('Product_Info'); let filteredData = []; let keywords = search.split(" "); keywords.forEach((obj, index) => { let databaseSearchResult = mydata.filtered("prodName LIKE[c] $0 OR prodDesc LIKE[c] $0", "*" + obj + "*") Object.keys(databaseSearchResult).map(key => { filteredData.push(databaseSearchResult[key]) }) }); key} /> ``` ===================================================================== ``` updateSearch = search => { this.setState({search: search}, () => this.searchText(search.trim())); }; searchText = (search) => { console.log(" Detail Activity ------------- search -->" + search); realm = new Realm({ schema: [StudentScheme] }) const mydata = realm.objects('Product_Info'); let filteredData = {}; let keywords = search.split(" "); keywords.forEach((obj, index) => { let databaseSearchResult = mydata.filtered("prodName LIKE[c] $0 OR prodDesc LIKE[c] $0", "*" + obj + "*") Object.keys(databaseSearchResult).map(key => { filteredData[`${Object.keys(filteredData).length}`] = databaseSearchResult[key] }) }); this.setState({ filteredData }, () => { console.log('Search-------------------------------FILTER DATA', this.state.filteredData) let dataProvider = new DataProvider((r1, r2) => r1 !== r2) let updatedDataProvider=dataProvider.cloneWithRows(filteredData) this.setState({dataProvider: updatedDataProvider},()=>{ console.log("CALLBACKK ", this.state.dataProvider) }) }) } { Object.keys(this.state.filteredData).map((key)=>( this.rowRenderer(null, this.state.filteredData[key]) )) } ``` Upvotes: 0 <issue_comment>username_2: ``` var Realm = require('realm'); let realm; let dataProvider = new DataProvider((r1, r2) => r1 !== r2) realm = new Realm({ schema: [StudentScheme] }) state = { search: '', dataProvider: new DataProvider((r1, r2) => r1 !== r2).cloneWithRows({}), filteredData: {} }; updateSearch = search => { this.setState({search: search}, () => this.searchText(search.trim())); }; searchText = (search) => { console.log(" Detail Activity ------------- search -->" + search); const mydata = realm.objects('Product_Info'); let filteredData = []; let keywords = search.split(" "); keywords.forEach((obj, index) => { let databaseSearchResult = mydata.filtered("prodName LIKE[c] $0 OR prodDesc LIKE[c] $0 OR prodPrice LIKE[c] $0 ", "*" + obj + "*" ) Object.keys(databaseSearchResult).map(key => { filteredData.push(databaseSearchResult[key]) }) }); this.setState({ filteredData }, () => { console.log('Search-------------------------------FILTER DATA \n', this.state.filteredData) console.log('Search-------------------------------FILTER DATA--------------- \n'); }) } fetchDB = () => { var mydata = realm.objects('Product_Info'); this.setState({dataProvider: dataProvider.cloneWithRows(mydata), filteredData: mydata}) //TODO ... } key} /> ``` Upvotes: 0
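The Realm queries in the code dumps above can be reduced to a single helper. The sketch below is only an illustration: the `Product_Info` schema name and the `prodName`/`prodDesc` fields are taken from the posted code, `CONTAINS[c]` replaces the hand-built `LIKE[c] "*term*"` wildcards, and note that chaining `filtered()` per word ANDs the words together, whereas the original loop OR-ed the result sets:

```javascript
import Realm from 'realm';

// Case-insensitive multi-word search over a Realm collection.
export function searchProducts(realm, query) {
  const words = query.trim().split(/\s+/).filter(Boolean);
  let results = realm.objects('Product_Info');
  for (const word of words) {
    results = results.filtered(
      'prodName CONTAINS[c] $0 OR prodDesc CONTAINS[c] $0', word
    );
  }
  return results; // live Realm Results; spread into [...results] if a plain array is needed
}
```

As for the original question: plain SQLite FTS is only available if the SQLite build bundled by the chosen React Native SQLite plugin was compiled with the FTS extension, which may be one reason the accepted answer steers towards Realm (or an in-memory library such as fuse.js) instead.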