date: string (length 10)
nb_tokens: int64 (60 to 629k)
text_size: int64 (234 to 1.02M)
content: string (length 234 to 1.02M)
2018/03/17
310
1,061
<issue_start>username_0: I have been working with a Python program which uses the `sounddevice` module to play audio. The program works fine on my office PC (running **Ubuntu 17.10**), but not on my home PC (running **Linux Mint 18.3**). It generates the following error: ``` Traceback (most recent call last): File "...path/to/my/code.py", line 11, in import sounddevice as sd File "/home/arif/anaconda3/lib/python3.6/site-packages/sounddevice.py", line 64, in raise OSError('PortAudio library not found') OSError: PortAudio library not found ``` How can I fix this problem?<issue_comment>username_1: I could fix this by installing the PortAudio library. ``` sudo apt-get install libportaudio2 ``` You may also try the following if this doesn't help. ``` sudo apt-get install libasound-dev ``` Upvotes: 6 <issue_comment>username_2: I have Rocky Linux 9, which is based on RHEL. I also fixed this by installing the PortAudio package via this command: ``` dnf install portaudio portaudio-devel ``` It might be useful for Rocky Linux users. Upvotes: 0
2018/03/17
637
2,373
<issue_start>username_0: ``` { "578080": { "success": true, "data": { "type": "game", "name": "PLAYERUNKNOWN'S BATTLEGROUNDS", "steam_appid": 578080, "required_age": 0, "is_free": false } } } ``` This is from the Steam API. As you can see, the root key is the ID itself, so I don't know how to deserialize this to an object. I've seen other questions regarding unknown property names, but can't seem to apply those solutions for when the root name is unknown.<issue_comment>username_1: One way to do this is to *Deserialize* to a `Dictionary<string, SomeClass>` **Classes** ``` public class Data { public string type { get; set; } public string name { get; set; } public int steam_appid { get; set; } public int required_age { get; set; } public bool is_free { get; set; } } public class SomeClass { public bool success { get; set; } public Data data { get; set; } } ``` **Usage** ``` var result = JsonConvert.DeserializeObject<Dictionary<string, SomeClass>>(json); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: If you don't care about making POCO models for your deserialized data and just want to grab some of the properties using a `dynamic`, you can use `JsonExtensionData` to get a `JToken` of the relevant subobject: ``` public class Foo { [JsonExtensionData] public Dictionary<string, JToken> ExtensionData { get; set; } } dynamic obj = JsonConvert.DeserializeObject<Foo>(json).ExtensionData.Single().Value; Console.WriteLine(obj.success); Console.WriteLine(obj.data.name); ``` This approach would be particularly useful if you could reuse `Foo` across several different types of responses since it doesn't care at all about the object schema. Upvotes: 1 <issue_comment>username_3: You can use anonymous type deserialization to parse JSON data like this, without creating classes. I assumed there is only one ID ("578080") present in your data. If more IDs are present, you can create an array for them. Hope it works. 
``` var finalResult = JsonConvert.DeserializeAnonymousType( yourdata, // input new { Id = new { success = "", data = "" } } ); Console.Write(finalResult.Id); // getting Id 578080 Console.Write(finalResult.Id.success); Console.Write(finalResult.Id.data.type); ``` Upvotes: 0
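The accepted answer's dictionary trick works because only the root key is unknown. The same idea is easy to see in plain JavaScript (the sample is trimmed from the question; variable names are illustrative):

```javascript
// Sample trimmed from the question; only the root key is unknown.
const json = '{"578080": {"success": true, "data": {"type": "game", "steam_appid": 578080}}}';
const parsed = JSON.parse(json);

// Object.entries exposes the unknown root key and its payload together.
const [appId, body] = Object.entries(parsed)[0];

console.log(appId);                 // '578080'
console.log(body.data.steam_appid); // 578080
```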
2018/03/17
447
2,030
<issue_start>username_0: I am trying to make an app which will back up my contact list to a Firebase database. I can store all contacts into the database, but when I save new numbers and hit the sync button again, it pushes all contacts again. I have the ArrayList of my contacts; now I want to check each and every contact to see if it already exists in Firebase or not. If it does not exist, it should be inserted. ``` public void SyncNow(View view) { final DatabaseReference root = FirebaseDatabase.getInstance().getReference(); for (i=0; i ``` This code is adding only the last contact of my UserContact ArrayList, once for each element of the ArrayList. I have 108 contacts and it pushes the last contact 108 times. Someone please help.<issue_comment>username_1: To solve this, you need to make a little change in your database structure by saving those numbers as in the following database structure: ``` Firebase-root | --- phoneNumbers | --- phoneNumberOne | | | --- //details | --- phoneNumberTwo | | | --- //details | --- phoneNumberThree | --- //details ``` Now it's very simple to check a number for existence. ``` DatabaseReference rootRef = FirebaseDatabase.getInstance().getReference(); DatabaseReference phoneNumberToCheckRef = rootRef.child("phoneNumbers").child(phoneNumberToCheck); ValueEventListener valueEventListener = new ValueEventListener() { @Override public void onDataChange(DataSnapshot dataSnapshot) { if(!dataSnapshot.exists()) { //Add number } } @Override public void onCancelled(DatabaseError databaseError) {} }; phoneNumberToCheckRef.addListenerForSingleValueEvent(valueEventListener); ``` Upvotes: 1 <issue_comment>username_2: If you use a for-each loop, then I think all records that are not already in the Firebase database will be added: ``` for (UserContact contact:storeContacts){ } ``` Upvotes: 0
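A minimal, Firebase-free sketch of the first answer's idea, keying the records by phone number and inserting only when the key is absent (the data here is hypothetical):

```javascript
// Firebase-free sketch of "insert only if absent", keyed by phone number
// as the answer's database structure suggests. Data here is hypothetical.
const db = new Map([['+15550001', { name: 'Alice' }]]);

function syncContact(number, details) {
  if (db.has(number)) return false; // already backed up, skip it
  db.set(number, details);
  return true;
}

console.log(syncContact('+15550001', { name: 'Alice' })); // false (exists)
console.log(syncContact('+15550002', { name: 'Bob' }));   // true (inserted)
```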
2018/03/17
567
1,906
<issue_start>username_0: I'm using [jest](http://airbnb.io/enzyme/docs/guides/jest.html) for writing test cases. One of my functions uses [uuid](https://www.npmjs.com/package/uuid), and because of this it is not a pure function. The code is something like this: ``` const myFunc = () => { const a = uuid(); return a; } ``` I'm writing my test case as: ``` test('should return a unique id value', () => { const a = uuid(); expect(myFunc()).toEqual(a); }); ``` Obviously it won't work, as it will generate a unique ID every time. How can I write a test case for such a function? **[EDIT]** I don't want to test the `uuid` function, as it will always generate a new ID and it is a library function which is already being tested. I want to test another function which uses `uuid`, generates a new object and returns that. This function is used by some other function and so on, which means I cannot test any of them, because the initial object has an ID which is different from the ID which I'm using while testing.<issue_comment>username_1: You can use `jest.mock` in order to mock the import of `uuid`, like that: ``` const uuidMock = jest.fn().mockImplementation(() => { return 'my-none-unique-uuid'; }); jest.mock('uuid', () => { return uuidMock; }); ``` The only caveat of that approach is that you need to apply the mock in the test file before you import your real file. Then you will even be able to assert on the mock. For more info read [jest.mock](https://facebook.github.io/jest/docs/en/es6-class-mocks.html). Upvotes: 4 [selected_answer]<issue_comment>username_2: I have found this works really nicely. At the top of the test file: ``` // your regular imports here jest.mock('uuid', () => ({ v4: () => '00000000-0000-0000-0000-000000000000' })); // describe('test suite here', () => { ``` in the actual code: ``` import { v4 as uuidv4 } from 'uuid'; ``` Upvotes: 4
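Besides mocking the module, the id generator can also be injected, which avoids `jest.mock` entirely. A plain-Node sketch (function names here are hypothetical, not from the question's codebase):

```javascript
// Instead of mocking the uuid module, the id generator can be injected.
// All names here are hypothetical, not from the question's codebase.
const makeUser = (name, generateId) => ({ id: generateId(), name });

// Production code would pass the real generator, e.g. makeUser('Ada', uuidv4).
// A test injects a deterministic one instead:
const fixedId = () => '00000000-0000-0000-0000-000000000000';
const user = makeUser('Ada', fixedId);

console.log(user.id); // '00000000-0000-0000-0000-000000000000'
```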
2018/03/17
360
1,154
<issue_start>username_0: I have a `vector<vector<Edge>> graph`. In my Edge object I have 3 int fields called height, weight and length. I have already populated my vector with elements and now I want to sort my vector in terms of weight, from largest to smallest. How would I be able to do so?
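The question asks for a largest-to-smallest sort on one field. A sketch of the comparator idea on hypothetical edge data, shown in JavaScript (in C++ the same comparator shape would be passed to `std::sort` as a callable):

```javascript
// Hypothetical edges mirroring the question's height/weight/length fields.
const graph = [
  { height: 1, weight: 5, length: 2 },
  { height: 3, weight: 9, length: 1 },
  { height: 2, weight: 7, length: 4 },
];

// Sort largest-to-smallest by weight: the comparator puts `a` first when
// it returns a negative number.
graph.sort((a, b) => b.weight - a.weight);

console.log(graph.map((e) => e.weight)); // [9, 7, 5]
```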
2018/03/17
805
3,022
<issue_start>username_0: I am working on a simple demo React project. I have a Home component which gets rendered when directly placed in `main.js` but when placed inside the `Router` in `Routes.js`, it is not rendered. Can anyone let me know what am I doing wrong here? `main.js` file ``` import React from "react"; import {render} from "react-dom"; import {App} from "./app/App"; import Routes from "./app/Routes"; render( , document.getElementById("root") ) ``` `Routes.js` file ``` import React from "react"; import { BrowserRouter as Router, Route, Switch, } from "react-router-dom"; import {App} from "./App"; import Home from "./components/Home"; export default function Routes(props) { console.log('Routes'); return ( ) } ``` `App.js` file ``` import React from "react"; import Header from "./components/Header"; export class App extends React.Component { render() { console.log("App render"); return ( Welcome to React ================= ) } } ``` `Header.js` file ``` import React, {Component} from 'react'; import {NavLink} from 'react-router-dom'; export default class Header extends Component { render() { console.log("Header render"); return ( Home ) } } ``` `Home.js` file ``` import React, {Component} from "react"; export default class Home extends Component { render() { console.log("Home render"); return ( Hello World! ------------ ) } } ```<issue_comment>username_1: This is because you are using App component as the wrapper of whole app, and defined the `Switch` as the children of App component, so you need to use `this.props.children` inside App. Like this: ``` export class App extends React.Component { render() { console.log("App render"); return ( Welcome to React ================= {this.props.children} ) } } ``` Consider this example to make the whole picture more clear, if you write: ``` ``` Means `Home` will get passed as children to App component, automatically it will not get rendered inside App, you need to put `this.props.children` somewhere inside App. 
Upvotes: 3 [selected_answer]<issue_comment>username_2: With react-router-v4 which you seem to be using, it is possible to have dynamic Routing which means you can add the Routes within nested components and hence apart from the solution that @MayankShukla suggested you could also keep the and other routes within `App` like ``` export default function Routes(props) { console.log('Routes'); return ( ) } export class App extends React.Component { render() { console.log("App render"); return ( Welcome to React ================= ) } } ``` You could read more about the **[advantages of Dynamic Routing here](https://stackoverflow.com/questions/48817305/advantages-of-dynamic-vs-static-routing-in-react/48960268#48960268)** Upvotes: 1
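The accepted answer's `this.props.children` mechanism can be sketched without React at all. This is a rough, illustrative model only, not real React:

```javascript
// A rough, React-free model of the accepted answer's point: the wrapper
// only shows nested content if it actually renders its children.
const Home = () => 'Hello World!';
const App = (props) => ['Welcome to React', props.children];

// Roughly what the router does: pick a child and hand it to the wrapper.
const rendered = App({ children: Home() });

console.log(rendered); // ['Welcome to React', 'Hello World!']
```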
2018/03/17
585
2,239
<issue_start>username_0: I need to configure encrypted password for Jboss. Right now i have direct username password in oracle-ds.xml. <https://docs.jboss.org/jbosssecurity/docs/6.0/security_guide/html/Encrypting_Data_Source_Passwords.html> After going through above Url, I removed username and password from oracle-ds.xml and added a new policy in login-config.xml and mapped the policy in oralce-ds.xml. Its working fine, in case of single JNDI-name. (Single datasource) In another application i have 5 data sources in Oracle-ds.xml. 1) I tried adding different policies in login-config and mapped respective in oracle-ds.xml - But it is not working 2) And I tried adding login-module under single policy, - But failed Can someone help me on this?
2018/03/17
748
2,215
<issue_start>username_0: **Here I am trying to connect three tables so that I can get my desired output** *Drupal 7 db select* ``` function abc($Incharge) { $res= db_select('node', 'n') ->Join('aa', 'f', 'f.id = n.nid') ->Join('bb', 'd', 'd.id = f.entity_id'); return $total_res = $res ->condition('n.type', 'ram') ->condition('d.target_id',$Incharge) ->condition('n.status', 1) ->fields('n', array('nid', 'title')) ->orderBy('n.title', 'ASC') ->execute()->fetchAllKeyed(); } ``` But I am facing an issue > > Fatal error: Call to a member function Join() on string in > /opt/lampp/htdocs/transgenic/sites/all/modules/report\_system/report\_system.module > on line 735 > > > [![enter image description here](https://i.stack.imgur.com/b6N6J.png)](https://i.stack.imgur.com/b6N6J.png)<issue_comment>username_1: I don't know much about Drupal, but I found a solution like this; hope it will help you. `SelectQuery::join(), SelectQuery::leftJoin()` etc. don't return the query (they return the alias of the created JOIN), so they can't be chained. Just separate your code out like this: ``` $query = db_select('node', 'n') ->fields('l') ->fields('s', array('stamp', 'message')) ->orderBy('`order`', 'ASC'); $query->Join('aa', 'f', 'f.id = n.nid'); $result = $query->execute(); ``` Upvotes: 1 <issue_comment>username_2: According to the documentation (<https://www.drupal.org/docs/7/api/database-api/dynamic-queries/joins>) > > The return value of a join method is the alias of the table that was > assigned > > > And it also states - > > Joins cannot be chained, so they have to be called separately (see > Chaining). If you are chaining multiple functions together do it like > this: > > > So you have to do something like... 
``` function abc($Incharge) { $res= db_select('node', 'n'); $res->Join('aa', 'f', 'f.id = n.nid'); $res->Join('bb', 'd', 'd.id = f.entity_id'); return $total_res = $res ->condition('n.type', 'ram') ->condition('d.target_id',$Incharge) ->condition('n.status', 1) ->fields('n', array('nid', 'title')) ->orderBy('n.title', 'ASC') ->execute()->fetchAllKeyed(); } ``` Upvotes: 3 [selected_answer]
2018/03/17
645
2,388
<issue_start>username_0: I want to finish all results first before going on to other loops. How can I manage to achieve that? ``` function callback_Original(results, status) { if (status === google.maps.places.PlacesServiceStatus.OK) { for (var i = 0; i < results.length; i++) { createMarker_Original(results[i]); } } } ``` It sometimes gives only a few places. ``` function createMarker_Original(place) { var photos = place.photos; if (!photos) { return; } var placeLoc = place.geometry.location; var marker = new google.maps.Marker({ map: map, position: place.geometry.location }); google.maps.event.addListener(marker, 'click', function() { infowindow.setContent('![](' + photos[0].getUrl({'maxWidth': 120, 'maxHeight': 120}) + ')**Name:** '+ place.name +' '+ '**Coordinates :** '+ place.geometry.location +' '+ '**Type:** '+ type +' '); infowindow.open(map, this); }); } ```<issue_comment>username_1: What about using a Promise? (ES6 code) ``` function callback_Original(results, status) { return new Promise((resolve, reject) => { if (status === google.maps.places.PlacesServiceStatus.OK) { for (var i = 0; i < results.length; i++) { createMarker_Original(results[i]); } resolve(); } else reject("Places service error"); }); } ``` And then just use ``` callback_Original(a,b) .then(response => { // Loop finished, what to do next? }) .catch(error => { // Error console.log(error); }); ``` Upvotes: 3 <issue_comment>username_2: Use `async`; it'll wait for the `promise`. [Reference](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function) ``` function callback_Original_child(results, status) { return new Promise((resolve, reject) => { if (status === google.maps.places.PlacesServiceStatus.OK) { for (var i = 0; i < results.length; i++) { createMarker_Original(results[i]); } resolve(); } else { reject("Places service error"); } }); } async function callback_Original(results, status) { try { await callback_Original_child(results, status); } catch (error) { console.log(error); } } callback_Original() ``` Upvotes: 1
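Both answers wrap the loop in a promise. If `createMarker_Original` itself were asynchronous, `Promise.all` would be the usual way to wait for every iteration. A runnable sketch with a hypothetical async marker factory:

```javascript
// Hypothetical async marker factory standing in for createMarker_Original.
const createMarker = (place) =>
  new Promise((resolve) => setTimeout(() => resolve('marker:' + place), 0));

// Promise.all resolves only after every marker promise has settled,
// so the next step runs strictly after the whole loop has finished.
async function processResults(results) {
  const markers = await Promise.all(results.map(createMarker));
  return markers;
}

processResults(['a', 'b', 'c']).then((markers) => console.log(markers));
// logs ['marker:a', 'marker:b', 'marker:c']
```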
2018/03/17
1,071
3,865
<issue_start>username_0: I see some strange code in our project, as follows. I tested it and got the right answer, but I think it is illegal; can anyone explain this to me? ``` class Member { public: Member(): a(0),b(1) {} int a; int b; }; // contains `Member` as its first member class Container { public: Container(): c(0),d(0) {} Member getMemb(){return fooObject;} Member fooObject; int c; int d; }; ``` and how we use it: ``` int main() { auto ctain = new Container; auto meb = (Member *)ctain; // here! I think this is illegal cout << "a is " << meb->a << ", b is " << meb->b << endl; return 0; } ``` But I get the right answer: a is 0 and b is 1. Is this just a coincidence? I also noted that if `fooObject` is not the first member, I will get a wrong answer.<issue_comment>username_1: The C++ Standard, in part **12.2 Class members**, says: > > If a standard-layout class object has any non-static data members, its address is the same as the address of its first non-static data member. Otherwise, its address is the same as the address of its first base class subobject (if any). > > > Both your classes have standard layout, so the observed behaviour agrees with the Standard. But the cast `auto meb = (Member *)ctain;` breaks the strict aliasing rule. Upvotes: 1 <issue_comment>username_2: It's not exactly a coincidence: your fooObject happens to be the first member of your Container class, so it starts at the same address as the Container object. If you do: ``` size_t s = offsetof(Container, Container::fooObject); ``` It will tell you that fooObject's offset is 0, i.e. it starts where your Container object starts in memory, so when you cast to a Member pointer it points to the correct address. 
But in other cases, for instance, you would be in big trouble for sure: ``` class Container { public: Container() : c(0),d(0) {} Member getMemb(){return fooObject;} int c; // fooObject isn't first member Member fooObject; int d; }; ``` Or if it were a virtual class, because virtual classes store a pointer for lookup into a table. ``` class Container { public: Container() : c(0),d(0) {} virtual ~Container() {} // Container is virtual, and has a vtable pointer // Meaning fooObject's offset into this class // most likely isn't 0 Member getMemb(){return fooObject;} Member fooObject; int c; int d; }; ``` Someone else will have to tell you whether this cast is legal even in your example, because I'm not sure. Upvotes: 2 <issue_comment>username_3: The snippet is legal. The [C style cast `(Member*)`](https://timsong-cpp.github.io/cppwp/expr.cast#4) here is effectively a `reinterpret_cast`. From [[basic.compound]](https://timsong-cpp.github.io/cppwp/basic.compound#4) > > Two objects `a` and `b` are pointer-interconvertible if: > > > * they are the same object, or > * one is a union object and the other is a non-static data member of that object, or > * one is a standard-layout class object and the other is the first non-static data member of that object, or, if the object has no non-static data members, the first base class subobject of that object, or [...] > > > If two objects are pointer-interconvertible, then they have the same address, and it is possible to obtain a pointer to one from a pointer to the other via a `reinterpret_cast`. > > > Special care should be taken to make sure it is indeed a [standard layout type](https://timsong-cpp.github.io/cppwp/class#7), possibly with a `static_assert(std::is_standard_layout_v<Container>)` On the other hand, you could sidestep this entire fiasco if you just wrote `auto meb = &ctain->fooObject` Upvotes: 4 [selected_answer]
2018/03/17
628
1,942
<issue_start>username_0: Image stretches if I don't use object-fit contains. Stretches in width, losing aspect ratio. object-fit contain fixes that. The problem is, the element itself is not contained, just the visible image. Which means if I make the image clickable, the whole element area (even outside the image) is clickable. <https://jsfiddle.net/nyysyngp/10/> (or see code below) I just want the visible image to be clickable. This seems to work on Firefox, but not Chrome. ```css body, html { margin: 0; padding: 0; background-color: red; display: flex; height: 100%; width: 100%; } #media { display: flex; background-color: #262423; justify-content: center; align-items: center; flex-direction: column; flex-grow: 1; } #media_split { display: flex; flex-direction: column; width: 100%; height: 100%; align-items: center; } #media_image_container { height: 50%; width: 100%; flex-grow: 1; flex-shrink: 0; display: flex; align-items: center; justify-content: center; background-color: green; } #media_image { object-fit: contain; max-height: calc(100% - 4em); max-width: calc(100% - 4.7em); min-height: 100px; min-width: 100px; cursor: pointer; } #media_tv { height: 50%; width: 100%; flex-grow: 1; flex-shrink: 0; display: flex; align-items: center; justify-content: center; background-color:blue; } ``` ```html ![](https://i.imgur.com/F26h0tq.jpg) ```<issue_comment>username_1: In #media\_image\_container remove display: flex; and add text-align: center; It will fix the issue. Upvotes: 0 <issue_comment>username_2: Well some months later I found a solution. Just by adding "position: absolute" to #media\_image the problem went away, which in my case didn't break anything else. Upvotes: 2 [selected_answer]
2018/03/17
223
758
<issue_start>username_0: I need to count all records from the database where the due date matches today's date. I found out that I should be able to do this using COUNT and CURDATE, but I am not able to get it right: ``` SELECT COUNT (id) FROM tasks WHERE due_date = CURDATE ```<issue_comment>username_1: I tried it myself and this is working: ``` SELECT count(id) FROM `tasks` WHERE due_date = CURRENT_DATE ``` **OR** ``` SELECT count(id) FROM `tasks` WHERE due_date = CURDATE() ``` [Read this, it will clear up your concepts](https://www.w3schools.com/sql/func_mysql_curdate.asp) Upvotes: 2 [selected_answer]<issue_comment>username_2: ``` SELECT COUNT(id) FROM tasks WHERE due_date = CURDATE() ``` Just remove the space between COUNT and (id) and it will work properly. Upvotes: 2
2018/03/17
325
1,055
<issue_start>username_0: I have a **big dataset** like the following: [![enter image description here](https://i.stack.imgur.com/5BFDp.png)](https://i.stack.imgur.com/5BFDp.png) There are many rows in this format. Finding each NaN row has to be based on the NaN itself. In other words, these rows cannot be located directly with df['Computer']. I need to find the NaN first, and then return its row index to locate these rows. Therefore, I would like to get: [![enter image description here](https://i.stack.imgur.com/1Ge1y.png)](https://i.stack.imgur.com/1Ge1y.png)
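The question is about pandas; the underlying operation (keep only the rows whose field is missing and report their indices) can be sketched in plain JavaScript with hypothetical rows standing in for the DataFrame:

```javascript
// Hypothetical rows standing in for the DataFrame; null marks the NaN cells.
const rows = [
  { Computer: 'A' },
  { Computer: null },
  { Computer: 'B' },
  { Computer: null },
];

// Pair each row with its index, then keep only the rows with a missing field.
const nanRows = rows
  .map((row, index) => ({ index, row }))
  .filter(({ row }) => row.Computer == null); // == null also catches undefined

console.log(nanRows.map(({ index }) => index)); // [1, 3]
```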
2018/03/17
293
967
<issue_start>username_0: I'm using Firebase with JavaScript and I want to assign `user.fcmkey` to an outside variable key. The following is my code but doesn't access the key value and prints `NA`. How can I access snapshot value outside of the snapshot block. ``` var key="NA"; firebase.database().ref('users').child("1234567896").once('value', function(snap) { const user = snap.val() const userKey = snap.key; const myKey=user.fcmkey; key = myKey }); ```
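The value only exists once the asynchronous callback has run, so the usual fix is to return it from an async function instead of assigning an outer variable. A runnable sketch with a hypothetical stand-in for the Firebase ref:

```javascript
// Hypothetical stand-in for the Firebase ref, so the pattern runs here.
const fakeRef = {
  once: () => Promise.resolve({
    key: '1234567896',
    val: () => ({ fcmkey: 'abc123' }),
  }),
};

// The outer variable is only safe to read after the promise settles, so
// return the value from an async function instead of assigning a global.
async function getKey(ref) {
  const snap = await ref.once('value');
  return snap.val().fcmkey;
}

getKey(fakeRef).then((key) => console.log(key)); // logs 'abc123'
```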
2018/03/17
1,805
7,546
<issue_start>username_0: I am getting 'Invalid CORS request' when I try to `PutMapping` of my API in Postman. But it is working fine for 'POST' and 'GET' mapping. Why is it not working for the 'PUT' operation? My Spring Boot version: 2.0 This is my config: ``` @Override protected void configure(HttpSecurity http) throws Exception { http.cors().and().csrf().disable() .sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS) .and() .authorizeRequests() .antMatchers("/h2-console/**/**").permitAll() .antMatchers(HttpMethod.GET,"/user/get-request").permitAll() .antMatchers(HttpMethod.POST,"/user/post-request").permitAll() .antMatchers(HttpMethod.PUT,"/user/put-request").permitAll() .and() .exceptionHandling().authenticationEntryPoint(jwtAuthenticationEntryPoint) .and() .addFilter(new JwtAuthenticationFilter(authenticationManager())) .addFilter(new JwtAuthorizationFilter(authenticationManager(), jwtUserDetailService)); } @Bean public WebMvcConfigurer corsConfigurer() { return new WebMvcConfigurerAdapter() { @Override public void addCorsMappings(CorsRegistry registry) { registry.addMapping("/**").allowedOrigins("*").allowedHeaders("*").exposedHeaders("Authorization"); } }; } ``` This is my controller : ``` @RestController @RequestMapping("/user") public class UserController { @PutMapping("/put-request") public void doResetPassword(@RequestBody String password) { System.out.println("PUT MAPPING"); } @PostMapping("/post-request") public void doResetPassword(@RequestBody String password) { System.out.println("POST MAPPING"); } @GetMapping("/get-request") public void doResetPassword() { System.out.println("GET MAPPING"); } } ```<issue_comment>username_1: ``` @Bean public CorsConfigurationSource corsConfigurationSource() { final CorsConfiguration configuration = new CorsConfiguration(); configuration.setAllowedOrigins(ImmutableList.of("*")); configuration.setAllowedMethods(ImmutableList.of("HEAD", "GET", "POST", "PUT", "DELETE", "PATCH", "OPTIONS")); 
configuration.setAllowCredentials(true); configuration.setAllowedHeaders(ImmutableList.of("*")); configuration.setExposedHeaders(ImmutableList.of("X-Auth-Token","Authorization","Access-Control-Allow-Origin","Access-Control-Allow-Credentials")); final UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource(); source.registerCorsConfiguration("/**", configuration); return source; } ``` I managed to allow CORS requests by adding this bean. You can configure setAllowedHeaders() and setExposedHeaders() to your needs. Also, I added this line to my controller: ``` @RequestMapping(value = "/auth") @RestController @CrossOrigin(origins = "*") //this line public class AuthenticationController {..} ``` If your controller needs to handle on-the-fly OPTIONS requests you can add this method to your controller. You can configure the value for your endpoint. ``` @RequestMapping(value = "/**/**",method = RequestMethod.OPTIONS) public ResponseEntity handle() { return new ResponseEntity(HttpStatus.OK); } ``` Upvotes: 3 <issue_comment>username_2: ```java @Configuration public class CrossOriginConfig { @Bean public WebMvcConfigurer corsConfigurer() { return new WebMvcConfigurer() { @Override public void addCorsMappings(CorsRegistry registry) { registry .addMapping("/**") .allowedMethods("HEAD", "GET", "POST", "PUT", "DELETE", "PATCH", "OPTIONS"); } }; } } ``` Upvotes: 5 [selected_answer]<issue_comment>username_3: If you are using an IIS server, it was a problem with the WebDAVModule, which seems to block PUT and DELETE methods by default! ``` ``` I really hope no one else has pain with that! =] Source: <https://mozartec.com/asp-net-core-error-405-methods-not-allowed-for-put-and-delete-requests-when-hosted-on-iis/> Upvotes: 0 <issue_comment>username_4: In Spring with Kotlin I did the following: ``` @Bean fun corsConfigurationSource(): CorsConfigurationSource? 
{ val source = UrlBasedCorsConfigurationSource() val corsConfig = CorsConfiguration() .applyPermitDefaultValues() .setAllowedOriginPatterns(listOf("*")) corsConfig.addAllowedMethod(HttpMethod.PUT) source.registerCorsConfiguration("/**", corsConfig) return source } ``` Upvotes: 0 <issue_comment>username_5: I just want to add 3 things. 1. The accepted answer and the one below it are wrong ways of doing CORS. If you are trying to configure CORS, that means you are trying to make your API accessible only by a number of clients you know. The lines ``` configuration.setAllowedOrigins(ImmutableList.of("*")); // from the first answer .addMapping("/**") // from the second answer ``` make the API accessible by any client. If that is what you want, you can just do the following without a need to configure another bean ``` http.cors().disable() ``` 2. The issue in the question may happen when you allow origins with `http` and do your request using `https`. So be aware that those 2 are different. 3. 
Below is a working configuration ``` // In the import section import static org.springframework.security.config.Customizer.withDefaults; // In the HttpSecurity configuration http.cors(withDefaults()) @Bean public CorsConfigurationSource corsConfigurationSource() { final CorsConfiguration configuration = new CorsConfiguration(); configuration.setAllowedOrigins(Arrays.asList("http://localhost:4200", "https://localhost:4200")); configuration.setAllowedMethods(Arrays.asList("HEAD", "GET", "POST", "PUT", "DELETE", "PATCH", "OPTIONS")); configuration.setAllowCredentials(true); configuration.setAllowedHeaders(Arrays.asList("Content-Type", "X-Auth-Token","Authorization","Access-Control-Allow-Origin","Access-Control-Allow-Credentials")); configuration.setExposedHeaders(Arrays.asList("Content-Type", "X-Auth-Token","Authorization","Access-Control-Allow-Origin","Access-Control-Allow-Credentials")); final UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource(); source.registerCorsConfiguration("/**", configuration); return source; } ``` Upvotes: 0 <issue_comment>username_6: I'm using **Spring Security** and **Spring Boot 2.1.2**. In my specific case, the PUT call worked after I explicitly declared the "PUT" method in the setAllowedMethods() from CorsConfigurationSource bean. The headers can be chosen depending on the application behavior. 
``` @Bean CorsConfigurationSource corsConfigurationSource() { final UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource(); final String headers = "Authorization, Access-Control-Allow-Headers, "+ "Origin, Accept, X-Requested-With, Content-Type, " + "Access-Control-Request-Method, Custom-Filter-Header"; CorsConfiguration config = new CorsConfiguration(); config.setAllowedMethods(Arrays.asList("GET","POST","PUT","DELETE")); // Required for PUT method config.addExposedHeader(headers); config.setAllowCredentials(true); config.applyPermitDefaultValues(); source.registerCorsConfiguration("/**", config); return source; } ``` Upvotes: 0
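All the answers boil down to one point: a cross-origin PUT is rejected unless the server explicitly allows it. A toy sketch of that decision (not real Spring or browser code):

```javascript
// Toy model of the preflight decision: a PUT is rejected unless the
// method is explicitly allowed, which is what the answers configure on
// the server side. This is not real Spring or browser code.
const isMethodAllowed = (method, allowedMethods) =>
  allowedMethods.includes(method.toUpperCase());

const defaults = ['GET', 'HEAD', 'POST'];            // typical permissive defaults
const configured = ['GET', 'POST', 'PUT', 'DELETE']; // after setAllowedMethods(...)

console.log(isMethodAllowed('PUT', defaults));   // false
console.log(isMethodAllowed('PUT', configured)); // true
```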
2018/03/17
1,411
5,932
<issue_start>username_0: Tracking or recording the position of an object when it is moved by a person from one place to another in a room with a camera. This makes the room look like an automatic warehouse, because it records objects' position even when they are moved by a person. I have no idea how to do this.<issue_comment>username_1: ``` @Bean public CorsConfigurationSource corsConfigurationSource() { final CorsConfiguration configuration = new CorsConfiguration(); configuration.setAllowedOrigins(ImmutableList.of("*")); configuration.setAllowedMethods(ImmutableList.of("HEAD", "GET", "POST", "PUT", "DELETE", "PATCH", "OPTIONS")); configuration.setAllowCredentials(true); configuration.setAllowedHeaders(ImmutableList.of("*")); configuration.setExposedHeaders(ImmutableList.of("X-Auth-Token","Authorization","Access-Control-Allow-Origin","Access-Control-Allow-Credentials")); final UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource(); source.registerCorsConfiguration("/**", configuration); return source; } ``` I managed to allow CORS requests by adding this bean. You can configure setAllowedHeaders() and setExposedHeaders() as needed. Also, I added this line to my controller; ``` @RequestMapping(value = "/auth") @RestController @CrossOrigin(origins = "*") //this line public class AuthenticationController {..} ``` If your controller needs to handle on-the-fly OPTIONS requests, you can add this method to your controller. You can configure the value to match your endpoint.
``` @RequestMapping(value = "/**/**",method = RequestMethod.OPTIONS) public ResponseEntity<?> handle() { return new ResponseEntity<>(HttpStatus.OK); } ``` Upvotes: 3 <issue_comment>username_2: ```java @Configuration public class CrossOriginConfig { @Bean public WebMvcConfigurer corsConfigurer() { return new WebMvcConfigurer() { @Override public void addCorsMappings(CorsRegistry registry) { registry .addMapping("/**") .allowedMethods("HEAD", "GET", "POST", "PUT", "DELETE", "PATCH", "OPTIONS"); } }; } } ``` Upvotes: 5 [selected_answer]<issue_comment>username_3: If you are using an IIS server, it was a problem with the WebDAVModule, which seems to block PUT and DELETE methods by default! Remove it in web.config: ``` <modules> <remove name="WebDAVModule" /> </modules> ``` I really hope no one else has to struggle with that! =] Source: <https://mozartec.com/asp-net-core-error-405-methods-not-allowed-for-put-and-delete-requests-when-hosted-on-iis/> Upvotes: 0 <issue_comment>username_4: In Spring with Kotlin I did the following: ``` @Bean fun corsConfigurationSource(): CorsConfigurationSource? { val source = UrlBasedCorsConfigurationSource() val corsConfig = CorsConfiguration() .applyPermitDefaultValues() .setAllowedOriginPatterns(listOf("*")) corsConfig.addAllowedMethod(HttpMethod.PUT) source.registerCorsConfiguration("/**", corsConfig) return source } ``` Upvotes: 0 <issue_comment>username_5: I just want to add 3 things. 1. The accepted answer and the one below it are wrong ways of doing CORS. If you are trying to configure CORS, that means you are trying to make your API accessible only by a number of clients you know. The lines ``` configuration.setAllowedOrigins(ImmutableList.of("*")); // from the first answer .addMapping("/**") // from the second answer ``` make the API accessible by any client. If that is what you want, you can just do the following without a need to configure another bean ``` http.cors().disable() ``` 2. The issue in the question may happen when you allow origins with `http` and do your request using `https`.
So be aware that those 2 are different. 3. Below is a working configuration ``` // In the import section import static org.springframework.security.config.Customizer.withDefaults; // In the HttpSecurity configuration http.cors(withDefaults()) @Bean public CorsConfigurationSource corsConfigurationSource() { final CorsConfiguration configuration = new CorsConfiguration(); configuration.setAllowedOrigins(Arrays.asList("http://localhost:4200", "https://localhost:4200")); configuration.setAllowedMethods(Arrays.asList("HEAD", "GET", "POST", "PUT", "DELETE", "PATCH", "OPTIONS")); configuration.setAllowCredentials(true); configuration.setAllowedHeaders(Arrays.asList("Content-Type", "X-Auth-Token","Authorization","Access-Control-Allow-Origin","Access-Control-Allow-Credentials")); configuration.setExposedHeaders(Arrays.asList("Content-Type", "X-Auth-Token","Authorization","Access-Control-Allow-Origin","Access-Control-Allow-Credentials")); final UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource(); source.registerCorsConfiguration("/**", configuration); return source; } ``` Upvotes: 0 <issue_comment>username_6: I'm using **Spring Security** and **Spring Boot 2.1.2**. In my specific case, the PUT call worked after I explicitly declared the "PUT" method in the setAllowedMethods() from CorsConfigurationSource bean. The headers can be chosen depending on the application behavior. 
``` @Bean CorsConfigurationSource corsConfigurationSource() { final UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource(); final String headers = "Authorization, Access-Control-Allow-Headers, "+ "Origin, Accept, X-Requested-With, Content-Type, " + "Access-Control-Request-Method, Custom-Filter-Header"; CorsConfiguration config = new CorsConfiguration(); config.setAllowedMethods(Arrays.asList("GET","POST","PUT","DELETE")); // Required for PUT method config.addExposedHeader(headers); config.setAllowCredentials(true); config.applyPermitDefaultValues(); source.registerCorsConfiguration("/**", config); return source; } ``` Upvotes: 0
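The distinction username_5 draws (wildcard origin versus an explicit allow-list) comes down to which headers the server sends back for a given Origin. Below is a small, framework-free Python sketch of that decision logic; the function and names are illustrative and not part of Spring:

```python
def cors_headers(origin, allowed_origins, allow_credentials=True):
    """Return the CORS response headers for a request's Origin, or {} if refused."""
    if "*" in allowed_origins:
        # Wildcard: any site may read the response. Per the CORS protocol,
        # credentialed requests cannot be combined with a literal "*".
        return {"Access-Control-Allow-Origin": "*"}
    if origin in allowed_origins:
        headers = {"Access-Control-Allow-Origin": origin, "Vary": "Origin"}
        if allow_credentials:
            headers["Access-Control-Allow-Credentials"] = "true"
        return headers
    return {}  # origin not in the allow-list: the browser blocks the response

print(cors_headers("https://localhost:4200",
                   ["http://localhost:4200", "https://localhost:4200"]))
```

The point of username_5's critique is visible here: with `"*"` in the allow-list, every origin gets a permissive answer, so the configuration no longer restricts callers.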
2018/03/17
213
834
<issue_start>username_0: I am new to Odoo. I am searching for a way to create/add new types to the chart of accounts in Odoo v10. When I create a new chart of accounts, only a few options are listed for the types. How can I do that? Thank you<issue_comment>username_1: Create a `menuitem` and `action` for the `account.account.type` model; then you can create/edit records. Try this code, ```xml <!-- record and menuitem ids are illustrative --> <record id="action_account_account_type" model="ir.actions.act_window"> <field name="name">account.account.type</field> <field name="res_model">account.account.type</field> <field name="view_mode">tree,form</field> </record> <menuitem id="menu_account_account_type" action="action_account_account_type"/> ``` Upvotes: 0 <issue_comment>username_2: The Odoo reporting framework doesn't know about custom account types. You have to be a VERY advanced user (with a VERY strong background in Accounting) to make custom account types work. It is a much better practice to use the reporting framework to group or otherwise segregate standard account types if you really need to split things. Upvotes: 1
2018/03/17
2,241
8,412
<issue_start>username_0: **testAjax function** *inside* **PostsController class**: ``` public function testAjax(Request $request) { $name = $request->input('name'); $validator = Validator::make($request->all(), ['name' => 'required']); if ($validator->fails()){ $errors = $validator->errors(); echo $errors; } else{ echo "welcome ". $name; } } ``` inside **web.php** file: ``` Route::get('/home' , function(){ return view('ajaxForm'); }); Route::post('/verifydata', 'PostsController@testAjax'); ``` **ajaxForm.blade.php:** ``` Name $(document).ready(function(){ $("#submit").click(function(){ var name = $("#name").val(); var token = $("#token").val(); /**Ajax code**/ $.ajax({ type: "post", url:"{{URL::to('/verifydata')}}", data:{name:name, _token: token}, success:function(data){ //console.log(data); $('#success_message').fadeIn().html(data); } }); /**Ajax code ends**/ }); }); ``` So when I click the submit button after entering some data, the output message (*echo "welcome ". $name;*) is printed. But when I click the submit button with an empty text box, it does not print the error message from the controller; instead it throws a 422 (Unprocessable Entity) error in the console. Why is my approach wrong here, and how can I print the error message? Please help. Thank you in advance. [![enter image description here](https://i.stack.imgur.com/5HNOK.png)](https://i.stack.imgur.com/5HNOK.png) [![enter image description here](https://i.stack.imgur.com/SrTP0.png)](https://i.stack.imgur.com/SrTP0.png)
``` /**Ajax code**/ $.ajax({ type: "post", url: "{{ url('/verifydata') }}", data: {name: name, _token: token}, dataType: 'json', // let's set the expected response format success: function(data){ //console.log(data); $('#success_message').fadeIn().html(data.message); }, error: function (err) { if (err.status == 422) { // when status code is 422, it's a validation issue console.log(err.responseJSON); $('#success_message').fadeIn().html(err.responseJSON.message); // you can loop through the errors object and show it to the user console.warn(err.responseJSON.errors); // display errors on each form field $.each(err.responseJSON.errors, function (i, error) { var el = $(document).find('[name="'+i+'"]'); el.after($(''+error[0]+'')); }); } } }); /**Ajax code ends**/ ``` On your controller ``` public function testAjax(Request $request) { // this will automatically return a 422 error response when request is invalid $this->validate($request, ['name' => 'required']); // below is executed when request is valid $name = $request->name; return response()->json([ 'message' => "Welcome $name" ]); } ``` Upvotes: 5 <issue_comment>username_2: Here's a better approach to validation: In your controller: ``` public function testAjax(Request $request) { $this->validate($request, [ 'name' => 'required' ]); return response("welcome ". $request->input('name')); } ``` The framework then will create a validator for you and validate the request. It will throw a [`ValidationException`](https://github.com/laravel/framework/blob/5.6/src/Illuminate/Validation/ValidationException.php) if it fails validation. 
Assuming you have not overridden how the validation exception is rendered, here's the default code [the built-in exception handler](https://github.com/laravel/framework/blob/5.3/src/Illuminate/Foundation/Exceptions/Handler.php#L213) will run ``` protected function convertValidationExceptionToResponse(ValidationException $e, $request) { if ($e->response) { return $e->response; } $errors = $e->validator->errors()->getMessages(); if ($request->expectsJson()) { return response()->json($errors, 422); } return redirect()->back()->withInput($request->input())->withErrors($errors); } ``` Again this is handled for you by the framework. On the client side you should be able to do: ``` $(document).ready(function(){ $("#submit").click(function(){ var name = $("#name").val(); var token = $("#token").val(); /**Ajax code**/ $.ajax({ type: "post", url:"{{URL::to('/verifydata')}}", data:{name:name, _token: token}, success:function(data){ //console.log(data); $('#success_message').fadeIn().html(data); }, error: function (xhr) { if (xhr.status == 422) { var errors = JSON.parse(xhr.responseText); if (errors.name) { alert('Name is required'); // and so on } } } }); /**Ajax code ends**/ }); }); ``` Upvotes: 2 <issue_comment>username_3: The best way to handle this in the PHP controller: ``` $validator = \Validator::make($request->all(), [ 'footballername' => 'required', 'club' => 'required', 'country' => 'required', ]); if ($validator->fails()) { return response()->json(['errors'=>$validator->errors()->all()]); } return response()->json(['success'=>'Record is successfully added']); ``` Upvotes: 2 <issue_comment>username_4: The code for form validation in Vanilla JavaScript: ``` const form_data = new FormData(document.querySelector('#form_data')); fetch("{{route('url')}}", { 'method': 'post', body: form_data, }).then(async response => { if (response.ok) { window.location.reload(); } const errors = await response.json(); var html = '<ul>'; for (let [key, error] of Object.entries(errors)) { for (e
in error) { html += `<li>${error[e]}</li>`; } } html += '</ul>'; //append html to some div throw new Error("error"); }) .catch((error) => { console.log(error) }); ``` Controller ``` use Illuminate\Support\Facades\Validator;//Use at top of the page $rules = [ 'file' => 'image|mimes:jpeg,png,jpg|max:1024', 'field1' => 'required', 'field2' => 'required' ]; $validator = Validator::make($request->post(), $rules); if ($validator->fails()) { return response()->json($validator->errors(), 400); } session()->flash('flash', ['status' => 'status', 'message' => 'message']); ``` Upvotes: 1 <issue_comment>username_5: **jQuery Code:** ``` let first_name= $('.first_name').val(); let last_name= $('.last_name').val(); let email= $('.email').val(); let subject= $('.subject').val(); let message= $('.message').val(); $('.show-message').empty(); console.log('clicked'); $.ajax({ type : 'POST', url : '{{route("contact-submit")}}', headers: { 'X-CSRF-TOKEN': $('meta[name="csrf-token"]').attr('content') }, data: { first_name, last_name, email, subject, message, }, success: function(data) { console.log('data',data); $('.show-message').html('Form Submitted'); }, error : function(data,data2,data3) { let response=data.responseJSON; let all_errors=response.errors; console.log('all_errors',all_errors); $.each(all_errors,function(key,value){ $('.show-message').append(`${value} `); }); } }); ``` **Controller Code:** ``` $validator=Validator::make($request->all(),[ 'first_name'=>'required', 'last_name'=>'required', 'email'=>'required|email', 'subject'=>'required', 'message'=>'required', ]); if($validator->fails()) { return response()->json([ 'success'=>false, 'errors'=>($validator->getMessageBag()->toArray()), ],400); } return response()->json([ 'success'=>true, ],200); ``` See more details at: <https://impulsivecode.com/validate-input-data-using-ajax-in-laravel/> Upvotes: 0
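The contract all of the answers above rely on is the same: valid input yields a 2xx JSON body, invalid input yields a 422 response with an errors map keyed by field name. Here is a framework-free Python sketch of that contract; the rule format and messages are illustrative, not Laravel's actual implementation:

```python
def validate(data, rules):
    """Return (status, body) mimicking the 422-vs-2xx contract used above."""
    errors = {field: ["The %s field is required." % field]
              for field in rules if not data.get(field)}
    if errors:
        # Laravel answers a failed validation with HTTP 422 and an errors map.
        return 422, {"errors": errors}
    return 200, {"message": "welcome %s" % data["name"]}

print(validate({"name": ""}, ["name"]))     # the empty-textbox case from the question
print(validate({"name": "arif"}, ["name"]))
```

Whichever client (jQuery `error:` callback or `fetch` with `response.ok`), the branch on the status code is what decides whether the body is treated as a success message or as a per-field errors map.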
2018/03/17
528
1,539
<issue_start>username_0: Here is how I get two plot handles which will draw on the same graphic (axes). ``` figureHandle = figure('NumberTitle','off',... 'Name','RFID Characteristics',... 'Color',[0 0 0],'Visible','off'); axesHandle = axes('Parent',figureHandle,... 'YGrid','on',... 'YColor',[0.9725 0.9725 0.9725],... 'XGrid','on',... 'XColor',[0.9725 0.9725 0.9725],... 'Color',[0 0 0]); hold on; xData = 0; yData=0; plotHandle1 = plot(axesHandle,xData,yData,'Marker','.','LineWidth',1,'Color',[0 1 0]); plotHandle2 = plot(axesHandle,xData,yData,'Marker','.','LineWidth',1,'Color',[1 0 0]); ``` This is the code I run repeatedly to draw real-time data. ``` set(plotHandle1,'YData',newestTag2Data(5,:),'XData',newestTag2Data(1,:)); hold on set(plotHandle2,'YData',newestTag3Data(5,:),'XData',newestTag3Data(1,:)); hold off set(figureHandle,'Visible','on'); drawnow; ``` However, I only see plotHandle2, not plotHandle1. It seems `hold on` does not work here.<issue_comment>username_1: `hold on` works on the active axes; to hold your specific axes use: ``` hold(axesHandle,'on') ``` You can make sure that you got 2 plots if you look at: ``` axesHandle.Children ``` Upvotes: 2 <issue_comment>username_2: Just a quick addition. You can use ``` get(axesHandle.Children); ``` to see all the properties of the lines on the axesHandle. If you have multiple lines on one axesHandle, use ``` get(axesHandle.Children(1)); get(axesHandle.Children(2)); ``` Upvotes: 0
2018/03/17
454
1,335
<issue_start>username_0: I am writing a program in C to separate the hour and minutes using arrays. However, the program gives me some outputs which exceed the length of the defined arrays. Can someone explain this to me? I want the two arrays to only include the hour and the rest of the time without (AM/PM). ``` #include <stdio.h> #include <stdlib.h> #include <string.h> int main() { char* s = (char *)malloc(10 * sizeof(char)); s="11:22:33AM"; printf("The time is %s\n",s); char Hour[2]; char Minutes[6]; int i; printf("The hour is %s\n",Hour); printf("The minute is %s\n",Minutes); for (i=0;i<2;i++){ Hour[i]=s[i]; } for (i=2;i<8;i++){ Minutes[i-2]=s[i]; } printf("%d\n",sizeof(Hour)); printf("%d\n",sizeof(Minutes)); printf("The hour is %s\n",Hour); printf("The minute is %s\n",Minutes); } ```<issue_comment>username_1: `hold on` works on active axes, to hold your specific axes use: ``` hold(axesHandle,'on') ``` You can make sure that you got 2 plots if you look at: ``` axesHandle.Children ``` Upvotes: 2 <issue_comment>username_2: Just a quick add up. You can use ``` get(axesHandle.Children); ``` to see all the properties of the line on the axesHandle. If you have multiple lines on one axesHandle, use ``` get(axesHandle.Children(1)); get(axesHandle.Children(2)); ``` Upvotes: 0
2018/03/17
435
1,672
<issue_start>username_0: ``` select title from Movie M , Rating R where exists((select M.mID from Movie) except (select R.mID from Rating)); ``` > > Error: near "(": syntax error > > ><issue_comment>username_1: Do the proper `JOIN`s with `LEFT OUTER JOIN` ``` SELECT m.* FROM Movie m LEFT OUTER JOIN Rating r ON r.mID = m.mID WHERE r.mID IS NULL ``` Upvotes: 1 <issue_comment>username_2: Parentheses are used for subqueries, but a [compound query](http://www.sqlite.org/lang_select.html#compound) is not composed of subqueries, so you must write both SELECTs together: ``` SELECT title FROM Movie M , Rating R WHERE EXISTS (SELECT M.mID FROM Movie EXCEPT SELECT R.mID FROM Rating); ``` But while this query is syntactically valid, it still does not make sense. EXISTS just checks whether the subquery on the rights side return any rows; this usually requires a [correlated subquery](http://www.sqlite.org/lang_expr.html#cosub) to make the subquery depend on the current row in the outer query. And it does not make sense to have the `Rating` table in the outer query. You should use IN instead of EXISTS: ``` SELECT title FROM Movie WHERE mID IN (SELECT mID FROM Movie EXCEPT SELECT mID FROM Rating); ``` And you already know that all IDs in the `Movie` table exist in the `Movie` table, so you do not have to repeat it in the subquery; simply reverse the comparison: ``` SELECT title FROM Movie WHERE mID NOT IN (SELECT mID FROM Rating); ``` Upvotes: 0
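The final query above can be sanity-checked against an in-memory database; this sketch uses Python's built-in `sqlite3` module with a toy version of the schema (the column set is illustrative):

```python
import sqlite3

# Build a toy Movie/Rating schema in memory.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Movie  (mID INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE Rating (mID INTEGER, stars INTEGER);
    INSERT INTO Movie  VALUES (1, 'Rated'), (2, 'Unrated');
    INSERT INTO Rating VALUES (1, 4);
""")

# Movies with no rating: NOT IN replaces the broken EXISTS/EXCEPT attempt.
rows = con.execute(
    "SELECT title FROM Movie WHERE mID NOT IN (SELECT mID FROM Rating)"
).fetchall()
print(rows)  # [('Unrated',)]
```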
2018/03/17
1,667
5,957
<issue_start>username_0: While entering data into the Firebase database I am facing this error: Uncaught (in promise ) : [object Object]. I am able to upload the image to storage, but other data like email and password are not being entered. Here I have created my own table to store user data [![enter image description here](https://i.stack.imgur.com/82hQk.png)](https://i.stack.imgur.com/82hQk.png) Please help register.html ``` Registration Email Address Password Select an image ![]() Register ``` register.ts ``` import { Component } from '@angular/core'; import { FormBuilder, FormGroup, Validators } from '@angular/forms'; import { IonicPage, NavController, NavParams, ViewController } from 'ionic-angular'; import { UserProvider } from './../../../providers/database/user/user'; import { PreloaderProvider } from './../../../providers/preloader/preloader'; import { ImageProvider } from './../../../providers/image/image'; import { User } from '../../../models/user'; import * as firebase from 'firebase'; /** * Generated class for the RegistrationPage page. * * See https://ionicframework.com/docs/components/#navigation for more info on * Ionic pages and navigation.
*/ @IonicPage() @Component({ selector: 'page-registration', templateUrl: 'registration.html', }) export class RegistrationPage { public form: any; public userImage: any; public users: any; public userEmail : any = ' '; public userPassword : any = ''; public userPic : any = ' '; public userId : string = ' '; constructor( private _FB: FormBuilder, private _IMG: ImageProvider, public viewCtrl: ViewController, private _LOADER: PreloaderProvider, private _DB: UserProvider, public navCtrl: NavController, public navParams: NavParams) { this.form = _FB.group({ 'email' : [' ', Validators.required], 'password' : [' ', Validators.required], 'image' : [' ', Validators.required] }); this.users = firebase.database().ref('users/'); } saveUser(val) { this._LOADER.displayPreloader(); let email: string = this.form.controls["email"].value, password: string = this.form.controls["password"].value, image : string = this.userImage; console.log(email + password + image); this._DB.uploadImage(image) .then((snapshot : any) => { let uploadImage : any = snapshot.downloadURL; this._DB.addToDatabase({ email : email, password : <PASSWORD>, image : uploadImage }) .then((data)=> { this._LOADER.hidePreloader(); }); }); } selectImage() { this._IMG.selectImage() .then((data) => { this.userImage = data; }); } } ``` provider/database/user.ts ``` import { Injectable } from '@angular/core'; import { Http } from '@angular/http'; import 'rxjs/add/operator/map'; import { Observable } from 'rxjs/Observable'; import * as firebase from 'firebase'; /* Generated class for the UserProvider provider. See https://angular.io/guide/dependency-injection for more info on providers and Angular DI. 
*/ @Injectable() export class UserProvider { constructor(public http: Http) { console.log('Hello UserProvider Provider'); } addToDatabase(userObj): Promise<any> { return new Promise((resolve) => { let addRef = firebase.database().ref('users'); addRef.push(userObj); resolve(true); }); } updateDatabase(id, userObj) : Promise<any> { return new Promise((resolve) => { var updateRef = firebase.database().ref('users').child(id); updateRef.update(userObj); resolve(true); }); } deleteDatabase(id) : Promise<any> { return new Promise((resolve) => { let ref = firebase.database().ref('users').child(id); ref.remove(); resolve(true); }); } uploadImage(imageString) : Promise<any> { let image : string = 'user-' + new Date().getTime() + '.jpg', storageRef : any, parseUpload : any; return new Promise((resolve, reject) => { storageRef = firebase.storage().ref('users/' + image); parseUpload = storageRef.putString(imageString, 'data_url'); parseUpload.on('stage_change', (_snapshot) => { }, (_err) => { reject(_err); }, (success) => { resolve(parseUpload.snapshot); }); }); } } ```<issue_comment>username_1: **Hi, maybe this will help you; here is what I did in my case to upload an image to Firebase Storage.** .html File ``` ``` .ts file ``` capturePicGallery(event){ this.imagPathSrc = event.srcElement.files[0]; firebase.storage().ref().child(pathStoreImage).put(this.imagPathSrc).then((snapshot) => { console.log("snapshot.downloadURL" ,snapshot.downloadURL); }); } ``` Upvotes: 2 <issue_comment>username_2: ``` accessGallery(sourceType:number) { /** this.accessGallery(0);//photo library this.accessGallery(1);//camera **/ const options: CameraOptions = { quality: 100, destinationType: this.camera.DestinationType.DATA_URL, encodingType: this.camera.EncodingType.JPEG, mediaType: this.camera.MediaType.PICTURE, correctOrientation: true, sourceType:sourceType, } this.camera.getPicture(options).then((imageData) => { // imageData is either a base64 encoded string or a file URI // If it's base64: this.base64Image =
'data:image/jpeg;base64,' + imageData; this.images.push(this.base64Image); this.upload(this.base64Image); }, (err) => { // Handle error }); } upload(event) { // way 1 (not working) // let data = new File(event,'demo.jpg'); // console.log("data",data); // this.afStorage.upload('/products/', data); // way 2 this.afStorage.ref(`products/${Date.now()}.jpeg`).putString(event, 'data_url') .then(url => console.log("upload success",url)) } ``` then you can access camera or get image from library and upload it ``` select image then upload capture image then upload ``` [![enter image description here](https://i.stack.imgur.com/HN4X0.jpg)](https://i.stack.imgur.com/HN4X0.jpg) Upvotes: 0
2018/03/17
819
2,652
<issue_start>username_0: Currently trying to run a bash script on startup to automatically install squid; however, the command I'm running requires input. Currently the script I have is: ``` #!/bin/sh PROXY_USER=user1 PROXY_PASS=<PASSWORD> wget https://raw.githubusercontent.com/hidden-refuge/spi/master/spi && bash spi -rhel7 && rm spi #After I run this command it asks "Enter username" #followed by "Enter password" and "Renter password" echo $PROXY_USER echo $PROXY_PASS echo $PROXY_PASS echo yes ``` However I am unable to get the input working, and the script fails to create a username and password. I'm running CentOS 7.<issue_comment>username_1: You are calling a tool which acts in interactive mode, so as [dani-gehtdichnixan](https://stackoverflow.com/users/1974371/dani-gehtdichnixan) mentioned at ([passing arguments to an interactive program non interactively](https://stackoverflow.com/questions/14392525/passing-arguments-to-an-interactive-program-non-interactively)) you can use the `expect` utility. Install `expect` on Debian: ``` apt-get install expect ``` Create a script called `spi-install.exp`, which could look like this: ``` #!/usr/bin/env expect set user username set pass <PASSWORD> spawn spi -rhel7 expect "Enter username" send "$user\r" expect "Enter password" send "$pass\r" expect "Renter password" send "$pass\r" ``` Then call it from your main bash script: ``` #!/bin/bash wget https://raw.githubusercontent.com/hidden-refuge/spi/master/spi && ./spi-install.exp && rm spi ``` > > Expect is used to automate control of interactive applications such as Telnet, FTP, passwd, fsck, rlogin, tip, SSH, and others. Expect uses pseudo terminals (Unix) or emulates a console (Windows), starts the target program, and then communicates with it, just as a human would, via the terminal or console interface. Tk, another Tcl extension, can be used to provide a GUI.
> > > <https://en.wikipedia.org/wiki/Expect> Reference : [1] [passing arguments to an interactive program non interactively](https://stackoverflow.com/questions/14392525/passing-arguments-to-an-interactive-program-non-interactively) [2] <https://askubuntu.com/questions/307067/how-to-execute-sudo-commands-with-expect-send-commands-in-bash-script> [3] <https://superuser.com/questions/488713/what-is-the-meaning-of-spawn-linux-shell-commands-centos6> Upvotes: 1 <issue_comment>username_2: Try just passing the values to bash's stdin ``` #!/bin/sh PROXY_USER=user1 PROXY_PASS=<PASSWORD> if wget https://raw.githubusercontent.com/hidden-refuge/spi/master/spi; then printf "%s\n" "$PROXY_USER" "$PROXY_PASS" "$PROXY_PASS" yes | bash spi -rhel7 rm spi fi ``` Upvotes: 0
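A caveat on the second answer: piping values to stdin only works if the target program reads its prompts from standard input rather than from the controlling terminal (programs that read from the TTY, like `passwd`, need `expect`). The mechanics of feeding answers over stdin can be seen with a stand-in script; this sketch does not run the real `spi` installer:

```python
import subprocess
import sys
import textwrap

# A stand-in for an installer that asks three questions on standard input.
child_src = textwrap.dedent("""
    user = input("Enter username")
    pw1 = input("Enter password")
    pw2 = input("Renter password")
    print("created", user, pw1 == pw2)
""")

# Feed the answers line by line, as `printf '%s\\n' user pass pass | bash spi -rhel7` would.
out = subprocess.run(
    [sys.executable, "-c", child_src],
    input="user1\npassword1\npassword1\n",
    capture_output=True, text=True,
).stdout
print(out)
```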
2018/03/17
710
2,261
<issue_start>username_0: I have two tables which have the same structure but different names (in the first table I store default values; in the second table I store values saved by the user). I select these values using UNION ALL: ``` SELECT * FROM `table_default` UNION ALL SELECT * FROM `table_saved` ``` Structure of table_default: ``` | ID | SOME_VAL | | 1 | def_val | | 2 | def_val | | 3 | def_val | ``` Structure of table_saved: ``` | ID | SOME_VAL | | 1 | test | | 3 | text | ``` And now, when I use this query: ``` SELECT * FROM `table_default` UNION ALL SELECT * FROM `table_saved` ``` I get: ``` | ID | SOME_VAL | | 1 | def_val | | 2 | def_val | | 3 | def_val | | 1 | test | | 3 | text | ``` But I want to get unique values by ID. table_saved is more important, so when the select returns duplicates I always want to remove the record from table_default. So finally I want to get: ``` | ID | SOME_VAL | | 2 | def_val | --> from TABLE_DEFAULT because this record (by ID) does not exist in table_saved | 1 | test | --> from TABLE_SAVED | 3 | text | --> from TABLE_SAVED ``` I can't use GROUP BY `id` because I don't know which record will be removed (sometimes GROUP BY removes the duplicate from table_default, but sometimes it also removes duplicates from table_saved), so I can't control this. Is it possible to remove duplicates (something like GROUP BY) using the table name and row name? Or maybe somebody has another idea. Please help. Thanks.
``` SELECT t1.ID, t1.SOME_VAL FROM table_default t1 LEFT JOIN table_saved t2 ON t1.ID = t2.ID WHERE t2.ID IS NULL UNION ALL SELECT ID, SOME_VAL FROM table_saved; ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: If a default value is always present you could use a LEFT JOIN and COALESCE: ``` SELECT d.ID, COALESCE(s.SOME_VAL, d.SOME_VAL) AS SOME_VAL FROM table_default d LEFT JOIN table_saved s USING(ID) ``` Upvotes: 0
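The accepted approach can be verified end-to-end on a throwaway database; the sketch below uses Python's built-in `sqlite3`, which handles this query the same way MySQL does:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table_default (ID INTEGER, SOME_VAL TEXT);
    CREATE TABLE table_saved   (ID INTEGER, SOME_VAL TEXT);
    INSERT INTO table_default VALUES (1, 'def_val'), (2, 'def_val'), (3, 'def_val');
    INSERT INTO table_saved   VALUES (1, 'test'), (3, 'text');
""")

# Keep defaults only where no saved row exists, then append all saved rows.
rows = con.execute("""
    SELECT d.ID, d.SOME_VAL
    FROM table_default d
    LEFT JOIN table_saved s ON d.ID = s.ID
    WHERE s.ID IS NULL
    UNION ALL
    SELECT ID, SOME_VAL FROM table_saved
    ORDER BY ID
""").fetchall()
print(rows)  # [(1, 'test'), (2, 'def_val'), (3, 'text')]
```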
2018/03/17
333
1,374
<issue_start>username_0: I need to create a form in Gravity Forms. The client needs to fill in the form and should be able to add sections with a title, body and image. I can create the form but can't get adding a field group or section from the frontend to work. Any ideas how to make fields repeatable on the frontend?<issue_comment>username_1: Gravity Forms does not allow creation of fields from the front-end. All fields have to be created from the backend. If you want to have some sort of repeatable fields, try to use the list field with multiple columns and then style them with your own CSS. If you want to change the input type of a particular column from text to dropdown, you can use the `gform_column_input` filter, which is described in further detail [here](https://docs.gravityforms.com/gform_column_input/) Upvotes: 0 <issue_comment>username_2: We have a solution (Gravity Wiz team) called Nested Forms: <https://gravitywiz.com/documentation/gravity-forms-nested-forms/> We wrote a guest post on the Gravity Forms website about it here: <https://www.gravityforms.com/repeatable-data-wordpress-forms/> Upvotes: 2 [selected_answer]<issue_comment>username_3: I exported the form, then duplicated the sections by copy-paste in the XML file. The conditional logic numbers changed, and now it works. This was the fastest option for me. The other solutions didn't have the right options. Upvotes: 0
2018/03/17
1,260
4,736
<issue_start>username_0: I want to optimize Java source code. I implemented this code for file download: ``` if (!file.exists()) { response.sendError(HttpServletResponse.SC_NOT_FOUND, "No file " + reportPath); return; } if (!file.canWrite()) { response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Can't write in file " + reportPath); return; } if (!file.canRead()) { response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Can't read file " + reportPath); return; } ``` How can I make it more compact?<issue_comment>username_1: You can write a method like this: ``` boolean check(boolean condition, HttpServletResponse response, int code, String message) { if (!condition) { response.sendError(code, message); } return condition; } ``` And then invoke it like: ``` if (!check(file.exists(), response, SC_NOT_FOUND, "No file " + reportPath) || !check(file.canWrite(), response, SC_UNAUTHORIZED, "Can't write " + reportPath) || Etc) { return; } ``` But I don't think this really helps readability. Upvotes: 0 <issue_comment>username_2: You can swap to an if-else, and then take advantage of the implicit return at the end of the method scope. ``` public void doThing(HttpServletResponse response) { if (!file.exists()) { response.sendError(HttpServletResponse.SC_NOT_FOUND, "No file " + reportPath); } else if (!file.canWrite()) { response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Can't write in file " + reportPath); } else if (!file.canRead()) { response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Can't read file " + reportPath); } else { doStuff(); } // Implicit return } ``` Upvotes: 2 <issue_comment>username_3: I know I have increased the number of lines here, but I think this version can easily accommodate adding more conditions or deleting existing conditions in the future, with code changes needed in fewer places.
``` boolean isFileExist = file.exists(); boolean isFileRead = file.canRead(); boolean isFileWrite = file.canWrite(); if (!isFileExist) { setError(response, 0, "No file " + reportPath); } else if (!isFileRead) { setError(response, 1, "Can't read file " + reportPath); } else if (!isFileWrite) { setError(response, 1, "Can't write in file " + reportPath); } else { setError(response, 2, "Error file " + reportPath); } private void setError(HttpServletResponse response, int type, String message) { switch (type) { case 0: response.sendError(HttpServletResponse.SC_NOT_FOUND, message); break; case 1: response.sendError(HttpServletResponse.SC_UNAUTHORIZED, message); break; default: break; } } ``` Upvotes: 0 <issue_comment>username_4: **If you are using Java8:** The `if` conditions can be hidden with a Map defined as `Map<Function<File, Boolean>, Consumer<HttpServletResponse>>` However, since `response.sendError` throws an IOException, we cannot use a [`Consumer`](https://docs.oracle.com/javase/8/docs/api/java/util/function/Consumer.html) here. So, I'm creating a functional interface similar to `Consumer` but that would throw an IOException. ``` public interface MyConsumer<T> { void accept(T t) throws IOException; } ``` This map holds the mapping between the predicate (condition) and the action.
``` Map<Predicate<File>, MyConsumer<HttpServletResponse>> conditionActionMap = new LinkedHashMap<>(); //Order matters here conditionActionMap.put(file -> !file.exists(), response -> response.sendError(HttpServletResponse.SC_NOT_FOUND, "No file " + reportPath)); conditionActionMap.put(file -> !file.canWrite(), response -> response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Can't write in file " + reportPath)); conditionActionMap.put(file -> !file.canRead(), response -> response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Can't read file " + reportPath)); Optional<MyConsumer<HttpServletResponse>> optionalConsumer = conditionActionMap .entrySet() .stream() .filter(entry -> entry.getKey().test(file)) .map(Map.Entry::getValue) .findFirst(); if (optionalConsumer.isPresent()) { optionalConsumer.get().accept(response); } else { //rest of method not shown in OP } ``` --- If the above `else` block also involves doing some operation with the HttpServletResponse (and your original method does not return anything), then you can create a `MyConsumer` for the success case. ``` MyConsumer<HttpServletResponse> successConsumer = response -> response.setStatus(200); //or whatever MyConsumer<HttpServletResponse> consumer = conditionActionMap.entrySet() .stream() .filter(entry -> entry.getKey().test(file)) .map(Map.Entry::getValue) .findFirst() .orElse(successConsumer); consumer.accept(response); ``` Upvotes: 0
2018/03/17
1,076
2,360
<issue_start>username_0: Table Name: INCLog [Table](https://i.stack.imgur.com/uCx2j.jpg) ``` UID Incident Number Modified Date Modified By 1 INC000011193511 2/18/2018 12:04 zuck 1 INC000011193511 2/19/2018 1:42 shei 1 INC000011193511 3/14/2018 5:08 byrr 1 INC000011193511 3/17/2018 5:08 byrr 2 INC000011193513 1/23/2018 2:58 r070 2 INC000011193513 1/27/2018 9:27 r070 2 INC000011193513 2/21/2018 5:42 gont 2 INC000011193513 3/16/2018 6:06 r070 3 INC000011193514 2/1/2018 6:07 shei 3 INC000011193514 2/13/2018 6:07 r070 4 INC000011193515 2/20/2018 21:05 moha 4 INC000011193515 2/21/2018 8:05 moha 4 INC000011193515 3/15/2018 18:34 doss ``` How do I use a SELECT statement to return incident numbers modified in the last 5 days? For instance, INC000011193511 was last modified on 3/17/2018, so the query should return the entire log of the same ticket ``` 1 INC000011193511 2/18/2018 12:04 zuck 1 INC000011193511 2/19/2018 1:42 shei 1 INC000011193511 3/14/2018 5:08 byrr 1 INC000011193511 3/17/2018 5:08 byrr ``` Thanks in advance for the help!<issue_comment>username_1: Find the last modified date for each incident with the MAX window function. Then compare with the current date. Your query should be something like: ``` select * from ( select *, mDate = max([modified date]) over (partition by [Incident Number]) from myTable ) t where mDate >= dateadd(dd, -5, getdate()) ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: If you want just the last 5 days' transactions for a specific incident then you can use the following query. ``` SELECT * FROM INCLog where IncidentNumber = 'INC000011193511' AND ModifiedDate >= DATEADD(DAY, -5, GETDATE()) ``` Upvotes: 0 <issue_comment>username_3: I would recommend doing this as: ``` SELECT Incident_Number, MAX(Modified_Date) maxDate FROM TempTable GROUP BY Incident_Number HAVING MAX(Modified_Date) >= dateadd(dd, -5, getdate()) ``` Thank You!
Upvotes: 0 <issue_comment>username_4: If I understand correctly you could use a `group by` clause to find the max date of each incident number and compare it with the last 5 days ``` SELECT t.* FROM table t CROSS APPLY ( SELECT Incident, MAX([Modified Date]) Mdate FROM table WHERE Incident = t.Incident GROUP BY Incident) c WHERE c.Mdate >= DATEADD(DD, -5, getdate()) ``` Upvotes: 0
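As a self-contained sanity check of the accepted window-function pattern (not the exact T-SQL above — this re-creates it in SQLite 3.25+, with simplified names, ISO date strings, and a fixed "today" of 2018-03-18 so the result is reproducible):

```python
import sqlite3

# Small in-memory copy of the log with simplified column names.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE inclog (incident TEXT, modified TEXT)")
con.executemany("INSERT INTO inclog VALUES (?, ?)", [
    ("INC000011193511", "2018-02-18"), ("INC000011193511", "2018-03-17"),
    ("INC000011193513", "2018-01-23"), ("INC000011193513", "2018-03-16"),
    ("INC000011193514", "2018-02-01"), ("INC000011193514", "2018-02-13"),
])

# Window MAX per incident, then keep every row of incidents whose latest
# modification falls within the last 5 days of the reference date.
rows = con.execute("""
    SELECT incident, modified FROM (
        SELECT incident, modified,
               MAX(modified) OVER (PARTITION BY incident) AS last_mod
        FROM inclog)
    WHERE last_mod >= date('2018-03-18', '-5 day')
    ORDER BY incident, modified
""").fetchall()
print(rows)  # all rows of ...511 and ...513; ...514 drops out entirely
```

Incidents 511 and 513 were last touched within the window, so their whole history is returned; incident 514 is excluded even though it has rows, because its latest modification is older than 5 days.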
2018/03/17
175
613
<issue_start>username_0: In a telegram bot based on `telegraf`, I want to wrap text into multiple lines. Now it shows only part of it. [![shows like this:](https://i.stack.imgur.com/qNZHy.png)](https://i.stack.imgur.com/qNZHy.png)<issue_comment>username_1: That's a client-side limit, unfortunately, you can't do this at this time :( You can make a suggestion to [Telegram Supporter](https://telegram.org/faq#telegram-support). Upvotes: 2 [selected_answer]<issue_comment>username_2: You can wrap the text into multiple lines using ``` import textwrap InlineKeyboardButton(textwrap.fill(answer)) ``` Upvotes: 0
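Independent of whether the Telegram client actually renders the result (the accepted answer says the truncation is client-side), here is what the `textwrap.fill` call from the second answer does on its own — it just re-joins the words with newlines at word boundaries:

```python
import textwrap

label = "I want to wrap text into multiple lines on a button"
wrapped = textwrap.fill(label, width=20)  # newline-joined lines of <= 20 chars
print(wrapped)
```

Every output line stays within the requested width, and no words are lost — only the whitespace between them changes.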
2018/03/17
1,151
4,635
<issue_start>username_0: [How to set maximum expanded height in android support design bottom sheet?](https://stackoverflow.com/questions/36636671/how-to-set-maximum-expanded-height-in-android-support-design-bottom-sheet/36636780?noredirect=1#comment85651951_36636780) This question is an extension of the above question: I want to set the max expanded height of the sheet, but dynamically, according to the screen size. I have tried setting new layout params on the view implementing the bottomsheet behaviour but it doesn't help.<issue_comment>username_1: Finally found it. This question troubled me a lot with no solution reported anywhere, and the answer lies in the behavior itself. The minimum offset is the max value up to which the bottomsheet should move, and we set the lower cap of that value to the desired height up to which we want the bottomsheet to move. You can expose a function to set the value or do it directly in your behavior. To dynamically set the max expanded height for the bottomsheet we need to increase the minimum offset value from 0 to our desired value in the BottomSheetBehavior class; let me show the code. Happy coding!! ``` // The minimum offset value up to which your bottomsheet should move private int mMinOffset; /** * Called when the parent CoordinatorLayout is about to lay out the given child view. */ @Override public boolean onLayoutChild(CoordinatorLayout parent, V child, int layoutDirection) { int dynamicHeight = Utils.dpToPx(parent.getContext(), your_value_in_dp); mMinOffset = Math.max(dynamicHeight, mParentHeight - child.getHeight()); mMaxOffset = Math.max(mParentHeight - mPeekHeight, mMinOffset); mAnchorOffset = Math.min(mParentHeight - mAnchorHeight, mMaxOffset); if (mState == STATE_EXPANDED) { ViewCompat.offsetTopAndBottom(child, mMinOffset); anchorViews(mMinOffset); } } ``` Upvotes: -1 [selected_answer]<issue_comment>username_2: The simplest solution is to set the maxHeight property of the bottom sheet like this.
``` DisplayMetrics displayMetrics = new DisplayMetrics(); activity.getWindowManager().getDefaultDisplay().getMetrics(displayMetrics); bottomSheet.setMaxHeight((int) (displayMetrics.heightPixels * 0.65)); ``` Upvotes: 0 <issue_comment>username_3: 2021, I'm late, but someone will need this Kotlin extension: ``` fun View.setupFullHeight(maxHeight: Double = 0.3) { val displayMetrics = context?.resources?.displayMetrics val height = displayMetrics?.heightPixels val maximalHeight = (height?.times(maxHeight))?.toInt() val layoutParams = this.layoutParams maximalHeight?.let { layoutParams.height = it } this.layoutParams = layoutParams } ``` How to use: ``` override fun onCreateDialog(savedInstanceState: Bundle?): Dialog { return object : BottomSheetDialog(requireContext(), R.style.DialogRoundedCornerStyle) { override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) dialog?.setOnShowListener { val bottomSheetDialog = it as BottomSheetDialog val parentLayout = bottomSheetDialog.findViewById<FrameLayout>(R.id.design_bottom_sheet) parentLayout?.let { view -> val behavior = BottomSheetBehavior.from(view) view.setupFullHeight() behavior.apply { state = BottomSheetBehavior.STATE_EXPANDED isDraggable = false isCancelable = false } } } } override fun onBackPressed() { super.onBackPressed() dialog?.dismiss() } } } ``` Upvotes: 1 <issue_comment>username_4: Please use this and chill :) 1. const val BOTTOMSHEET_HEIGHT_TO_SCREEN_HEIGHT_RATIO = 0.80 //change according to your requirement 2.
override onCreateDialog() in your bottomsheetFragment --- ``` override fun onCreateDialog(savedInstanceState: Bundle?): Dialog { val dialog = super.onCreateDialog(savedInstanceState) as BottomSheetDialog dialog.setOnShowListener { dialog.findViewById<FrameLayout>(com.google.android.material.R.id.design_bottom_sheet) ?.apply { val maxDesiredHeight = (resources.displayMetrics.heightPixels * BOTTOMSHEET_HEIGHT_TO_SCREEN_HEIGHT_RATIO).toInt() if (this.height > maxDesiredHeight) { val bottomSheetLayoutParams = this.layoutParams bottomSheetLayoutParams.height = maxDesiredHeight this.layoutParams = bottomSheetLayoutParams } BottomSheetBehavior.from(this)?.apply { this.state = BottomSheetBehavior.STATE_EXPANDED this.skipCollapsed = true } } } return dialog } ``` Upvotes: 2
2018/03/17
1,543
5,461
<issue_start>username_0: I'm creating a table using html and add some designs using css I already created a table and put background color horizontally/ by row like this but I want to make it vertical [![enter image description here](https://i.stack.imgur.com/OOI2y.png)](https://i.stack.imgur.com/OOI2y.png) and here's my html code ```css table { width: 100%; border-collapse: collapse; border-spacing: 0; margin-bottom: 20px; } table tr:nth-child(2n-1) td { background: #F5F5F5; } table th, table td { text-align: center; } table th { padding: 5px 20px; color: #5D6975; border-bottom: 1px solid #C1CED9; white-space: nowrap; font-weight: normal; } table .service, table .desc { text-align: left; } table td { padding: 20px; text-align: right; } table td.service, table td.desc { vertical-align: top; } table td.unit, table td.qty, table td.total { font-size: 1.2em; } table td.grand { border-top: 1px solid #5D6975; } ``` ```html | | | | 房间号 ( UNIT NO. ) | 费用 ( RATE ) | | | AED: /- | | | | | 5% VAT(增值税) | AED: /- | | | | | TOTAL (共计): | AED: /- | ``` I use this code to make the background horizontal/by row ``` table tr:nth-child(2n-1) td { background: #F5F5F5; } ``` The question is how can I put the background vertically or by columns? Thanks!
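For the column-wise version the question asks about, the usual trick (a sketch, assuming no cells span multiple columns) is to move the `:nth-child` selector from the rows onto the cells, so the same index is selected in every row:

```css
/* stripe odd columns instead of odd rows */
table td:nth-child(2n-1),
table th:nth-child(2n-1) {
  background: #F5F5F5;
}
```

`td:nth-child(2n-1)` matches the 1st, 3rd, 5th... cell of each row, which lines up into shaded columns as long as every row has the same cell count.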
2018/03/17
773
3,187
<issue_start>username_0: I want to set a *session attribute* with the name that is sent by the user. The user will first log in, and when he has logged in I want his username to be set as a session attribute. What should I do? This is my controller: ``` @GetMapping("/login") public String login() { return "Login"; } @PostMapping("/loginCheck") public String checkLogin(@ModelAttribute("users") Users user) { if (userService.checkUser(user)) { return "redirect:/"+user.getUsername()+"/"; } else { return "Login"; } } @PostMapping("/signup") public ModelAndView createuser(@ModelAttribute("users") Users user) { if (userService.checkUser(user)) { return new ModelAndView("Login"); } else { userService.adduser(user); return new ModelAndView("Login"); } } ``` Now how do I set the username, which I get from `user.getUsername()`, as a session attribute?<issue_comment>username_1: In SpringMVC you can have the HttpSession injected automatically by adding it as a parameter to your method. So, your login could be something similar to: ``` @GetMapping("/login") public ModelAndView login(@ModelAttribute("users") Users user, HttpSession session) { ModelAndView view = new ModelAndView(); if (userService.authUser(user)) { //Made this method up session.setAttribute("username", user.getUsername()); view.setViewName("homepage"); //Made up view } else { view.setViewName("Login"); } return view; } ``` Upvotes: 3 <issue_comment>username_2: ``` @Autowired ObjectFactory<HttpSession> httpSessionFactory; . . . HttpSession session = httpSessionFactory.getObject(); ``` [Works well. Thanks to this post.](https://stackoverflow.com/a/48489304/1690081) Upvotes: 0 <issue_comment>username_3: If you use Spring Security, register a bean listening for Spring Security's `InteractiveAuthenticationSuccessEvent` and `SessionDestroyedEvent` events. These events fire without any explicit configuration in a default Spring Boot environment.
See <https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#web.security>: > > The basic features you get by default in a web application are: > > > * . . . > * A DefaultAuthenticationEventPublisher for publishing authentication events. > > > By handling these events you can add "username" as a session attribute immediately after a user logs on and remove that attribute when the security session (security context) is destroyed: ``` @Component public class SessionStoreUsernameAuthEventHandler { @EventListener public void audit(InteractiveAuthenticationSuccessEvent e) { getSession().ifPresent(s -> s.setAttribute("username", e.getAuthentication().getName())); } @EventListener public void audit(SessionDestroyedEvent e) { getSession().ifPresent(s -> s.removeAttribute("username")); } private static Optional<HttpServletRequest> getCurrentRequest() { return Optional.ofNullable(RequestContextHolder.getRequestAttributes()) .filter(ServletRequestAttributes.class::isInstance) .map(ServletRequestAttributes.class::cast) .map(ServletRequestAttributes::getRequest); } private static Optional<HttpSession> getSession() { return getCurrentRequest().map(HttpServletRequest::getSession); } } ``` Upvotes: 1
2018/03/17
564
1,874
<issue_start>username_0: I have to add the css file externally. I tried the code import "./Login.css"; which is located in the base path. I can't get the file loaded, and it throws the error below. **You may need an appropriate loader to handle this file type.** ``` .Login { padding: 60px 0; } ``` I updated the webpack config also. Webpack config: ``` var config = { entry: './main.js', output: { path:'/', filename: 'index.js', }, devServer: { inline: true, port: 8080 }, module: { loaders: [ { test: /\.css$/, include: /node_modules/, loaders: ['style-loader', 'css-loader'], }, { test: /\.jsx?$/, exclude: /node_modules/, loader: 'babel-loader', query: { presets: ['es2015', 'react'] } } ] } } module.exports = config; ``` In JSX File: ``` import React from 'react'; import { Button, FormGroup, FormControl, ControlLabel } from "react-bootstrap"; import "./Login.css"; ``` **Package.json,** ``` { "name": "reactapp", "version": "1.0.0", "description": "Tetser", "main": "index.js", "scripts": { "start": "webpack-dev-server --hot" }, "author": "", "license": "ISC", "dependencies": { "react": "^15.6.1", "react-bootstrap": "^0.32.1", "react-dom": "^15.6.1" } } ``` I have tried almost everything, but found no solution. Can anyone clarify, please?<issue_comment>username_1: You will need to add css-loader and style-loader to your dev dependencies in package.json Link to webpack docs: <https://webpack.js.org/concepts/loaders/#using-loaders> Upvotes: 2 <issue_comment>username_2: The way I do it (ex: import fonts from fonts.com): * create a css file, * import the external css in the local css file, * import the local css file in js Upvotes: 1
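To make the first answer concrete: the two loaders also have to appear in `package.json`, which in the question only lists runtime dependencies. A sketch of the missing block (added via `npm install --save-dev style-loader css-loader`) — the version numbers here are illustrative assumptions, not pinned requirements:

```
"devDependencies": {
  "style-loader": "^0.20.0",
  "css-loader": "^0.28.0"
}
```

Without these packages installed, webpack cannot resolve the `'style-loader'` / `'css-loader'` names referenced in the config, regardless of how the `test`/`include` rules are written.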
2018/03/17
336
999
<issue_start>username_0: I’m wondering how I can get the 4 digit user ID of a discord user from a message. For example, PruinaTempestatis#8487. Can someone help me?<issue_comment>username_1: If the number always appears after a single hash `#` then you can simply do a split on that string and get the second value of the array like this: **USING SPLIT** ```js var str = 'PruinaTempestatis#8487'; var digits = str.split('#')[1]; console.log(digits); ``` **USING REGEX** ```js var str = 'PruinaTempestatis#8487'; var digits = str.replace( /^\D+/g, ''); console.log(digits); ``` **USING SUBSTRING** ```js var str = 'PruinaTempestatis#8487'; var digits = str.substr(str.indexOf('#')+1, str.length); console.log(digits); ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: Using the author's tag ``` message.channel.send(message.author.tag) ``` Upvotes: 1 <issue_comment>username_3: Split their tag into an array and get the last element. ``` member.user.tag.split('#').pop(); ``` Upvotes: 0
2018/03/17
398
1,282
<issue_start>username_0: Upon my research, I realised Google's Activity Recognition API allows you to detect the activity a phone user is undertaking (walking, driving, etc), but I want to detect a change in transition. I want to know if this transition occurs: (walking to driving) or (driving to walking). As soon as a user enters a geofence, I want to listen for this change (driving to walking) or (walking to driving). How do I achieve this?
2018/03/17
1,108
2,582
<issue_start>username_0: I have some database like the example below. ``` WITH TB AS( SELECT 'A' C1, 'B' AS C2, 1 AS N1, 1 AS N2 FROM DUAL UNION ALL SELECT 'A' C1, 'B' AS C2, 2 AS N1, 2 AS N2 FROM DUAL UNION ALL SELECT 'A1' C1, 'B1' AS C2, 1 AS N1, 3 AS N2 FROM DUAL UNION ALL SELECT 'A1' C1, 'B1' AS C2, 2 AS N1, 4 AS N2 FROM DUAL UNION ALL SELECT 'A1' C1, 'B1' AS C2, 3 AS N1, 1 AS N2 FROM DUAL UNION ALL SELECT 'A2' C1, 'B2' AS C2, 1 AS N1, 6 AS N2 FROM DUAL ) SELECT * FROM TB ``` How can I list all rows with Max(N1), grouped by C1, C2, like the image below? [![enter image description here](https://i.stack.imgur.com/raVRu.png)](https://i.stack.imgur.com/raVRu.png)<issue_comment>username_1: The description of the requirements in the question is unclear; I am guessing that max(n1) should be calculated **for each group of C1+C2**. If this is the case, then you can use the `MAX() OVER ()` analytic function in this way: ``` SELECT tb.*, max( n1 ) over (partition by c1, c2 ) xxxx FROM TB; | C1 | C2 | N1 | N2 | XXXX | |----|----|----|----|------| | A | B | 1 | 1 | 2 | | A | B | 2 | 2 | 2 | | A1 | B1 | 1 | 3 | 3 | | A1 | B1 | 2 | 4 | 3 | | A1 | B1 | 3 | 1 | 3 | | A2 | B2 | 1 | 6 | 1 | ``` and then wrap the above query as a subquery, and filter out unwanted rows: ``` SELECT c1,c2,n1,n2 FROM ( SELECT tb.*, max( n1 ) over (partition by c1, c2 ) xxxx FROM TB ) WHERE n1 = xxxx | C1 | C2 | N1 | N2 | |----|----|----|----| | A | B | 2 | 2 | | A1 | B1 | 3 | 1 | | A2 | B2 | 1 | 6 | ``` --- Demo: <http://sqlfiddle.com/#!4/d2fb9/4> Upvotes: 3 [selected_answer]<issue_comment>username_2: One way is using `KEEP ..
DENSE_RANK` [SQL Fiddle](http://sqlfiddle.com/#!4/97eade/500) **Query 1**: ``` WITH TB AS( SELECT 'A' C1, 'B' AS C2, 1 AS N1, 1 AS N2 FROM DUAL UNION ALL SELECT 'A' C1, 'B' AS C2, 2 AS N1, 2 AS N2 FROM DUAL UNION ALL SELECT 'A1' C1, 'B1' AS C2, 1 AS N1, 3 AS N2 FROM DUAL UNION ALL SELECT 'A1' C1, 'B1' AS C2, 2 AS N1, 4 AS N2 FROM DUAL UNION ALL SELECT 'A1' C1, 'B1' AS C2, 3 AS N1, 1 AS N2 FROM DUAL UNION ALL SELECT 'A2' C1, 'B2' AS C2, 1 AS N1, 6 AS N2 FROM DUAL ) SELECT C1 ,C2 ,MAX(N1) AS N1 ,MAX(N2) KEEP ( DENSE_RANK FIRST ORDER BY N1 DESC ) AS N2 FROM TB GROUP BY C1 ,C2 ``` **[Results](http://sqlfiddle.com/#!4/97eade/500/0)**: ``` | C1 | C2 | N1 | N2 | |----|----|----|----| | A | B | 2 | 2 | | A1 | B1 | 3 | 1 | | A2 | B2 | 1 | 6 | ``` Upvotes: 2
2018/03/17
5,003
18,350
<issue_start>username_0: I found this challenge problem which states the following : > > Suppose that there are n rectangles on the XY plane. Write a program to calculate the maximum possible number of rectangles that can be crossed with a single straight line drawn on this plane. > > > ![see image for an example](https://i.stack.imgur.com/ulPm8.png) I have been brainstorming for quite some time but couldn't find a solution. Maybe at some stage we could use dynamic programming steps, but I couldn't figure out how to start.<issue_comment>username_1: (Edit of my earlier answer that considered rotating the plane.) Here's a sketch of the `O(n^2)` algorithm, which combines username_5's idea with <NAME>'s reference to dual line arrangements as sorted angular sequences. We start out with a doubly connected edge list or similar structure, allowing us to split an edge in `O(1)` time, and a method to traverse the faces we create as we populate a 2-dimensional plane. For simplicity, let's use just three of the twelve corners on the rectangles below: ``` 9| (5,9)___(7,9) 8| | | 7| (4,6)| | 6| ___C | | 5| | | | | 4| |___| | | 3| ___ |___|(7,3) 2| | | B (5,3) 1|A|___|(1,1) |_ _ _ _ _ _ _ _ 1 2 3 4 5 6 7 ``` We insert the three points (corners) in the dual plane according to the following transformation: ``` point p => line p* as y = p_x*x - p_y line l as y = ax + b => point l* as (a, -b) ``` Let's enter the points in order `A, B, C`. We first enter `A => y = x - 1`. Since there is only one edge so far, we insert `B => y = 5x - 3`, which creates the vertex `(1/2, -1/2)` and splits our edge. (One elegant aspect of this solution is that each vertex (point) in the dual plane is actually the dual point of the line passing through the rectangles' corners. Observe `1 = 1/2*1 + 1/2` and `3 = 1/2*5 + 1/2`, points `(1,1)` and `(5,3)`.) Entering the last point, `C => y = 4x - 6`, we now look for the leftmost face (could be an incomplete face) where it will intersect.
This search is `O(n)` time since we have to try each face. We find and create the vertex `(-3, -18)`, splitting the lower edge of `5x - 3` and traverse up the edges to split the right half of `x - 1` at vertex `(5/3, 2/3)`. Each insertion has `O(n)` time since we must first find the leftmost face, then traverse each face to split edges and mark the vertices (intersection points for the line). In the dual plane we now have: [![enter image description here](https://i.stack.imgur.com/fttdj.jpg)](https://i.stack.imgur.com/fttdj.jpg) After constructing the line arrangement, we begin our iteration on our three example points (rectangle corners). Part of the magic in reconstructing a sorted angular sequence in relation to one point is partitioning the angles (each corresponding with an ordered line intersection in the dual plane) into those corresponding with a point on the right (with a greater x-coordinate) and those on the left, and concatenating the two sequences to get an ordered sequence from -90 deg to -270 degrees. (The points on the right transform to lines with positive slopes in relation to the fixed point; the ones on the left, with negative slopes. Rotate your device/screen clockwise until the line for `(C*) 4x - 6` becomes horizontal and you'll see that `B*` now has a positive slope and `A*` negative.) Why does it work? If a point `p` in the original plane is transformed into a line `p*` in the dual plane, then traversing that dual line from left to right corresponds with rotating a line around `p` in the original plane that also passes through `p`. The dual line marks all the slopes of this rotating line by the x-coordinate from negative infinity (vertical) to zero (horizontal) to infinity (vertical again).
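The dual transform used throughout this answer is mechanical enough to check in a few lines. As a tiny sketch (bare tuples, none of the arrangement machinery), verified against the example corners:

```python
def dual_line(px, py):
    """Point (px, py) maps to the dual line y = px*x - py, kept as (slope, intercept)."""
    return (px, -py)

def dual_point(slope, intercept):
    """Line y = slope*x + intercept maps to the dual point (slope, -intercept)."""
    return (slope, -intercept)

# Corners A=(1,1), B=(5,3), C=(4,6) from the walkthrough:
print(dual_line(1, 1))  # (1, -1)  i.e. A* : y = x - 1
print(dual_line(5, 3))  # (5, -3)  i.e. B* : y = 5x - 3
print(dual_line(4, 6))  # (4, -6)  i.e. C* : y = 4x - 6

# The arrangement vertex where A* and B* cross, (1/2, -1/2),
# is the dual of the line AB: y = x/2 + 1/2
print(dual_point(0.5, 0.5))  # (0.5, -0.5)
```

This is exactly the property used above: vertices of the dual arrangement correspond to lines through pairs of corners in the original plane.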
(Let's summarize the rectangle-count-logic, updating the count_array for the current rectangle while iterating through the angular sequence: if it's 1, increment the current intersection count; if it's 4 and the line is not directly on a corner, set it to 0 and decrement the current intersection count.) ``` Pick A, lookup A* => x - 1. Obtain the concatenated sequence by traversing the edges in O(n) => [(B*) 5x - 3, (C*) 4x - 6] ++ [No points left of A] Initialise an empty counter array, count_array of length n-1 Initialise a pointer, ptr, to track rectangle corners passed in the opposite direction of the current vector. Iterate: vertex (1/2, -1/2) => line y = 1/2x + 1/2 (AB) perform rectangle-count-logic if the slope is positive (1/2 is positive): while the point at ptr is higher than the line: perform rectangle-count-logic else if the slope is negative: while the point at ptr is lower than the line: perform rectangle-count-logic => ptr passes through the rest of the points up to the corner across from C, so intersection count is unchanged vertex (5/3, 2/3) => line y = 5/3x - 2/3 (AC) ``` We can see that `(5,9)` is above the line through `AC (y = 5/3x - 2/3)`, which means at this point we would have counted the intersection with the rightmost rectangle and not yet reset the count for it, totaling 3 rectangles for this line.
We can also see in the graph of the dual plane, the other angular sequences: ``` for point B => B* => 5x - 3: [No points right of B] ++ [(C*) 4x - 6, (A*) x - 1] for point C => C* => 4x - 6: [(B*) 5x - 3] ++ [(A*) x - 1] (note that we start at -90 deg up to -270 deg) ``` Upvotes: 2 <issue_comment>username_2: How about the following algorithm: ``` RES = 0 // maximum number of intersections CORNERS[] // all rectangles corners listed as (x, y) points for A in CORNERS for B in CORNERS // optimization: starting from corner next to A RES = max(RES, CountIntersectionsWithLine(A.x, A.y, B.x, B.y)) return RES ``` In other words, start drawing lines from each rectangle corner to each other rectangle corner and find the maximum number of intersections. As suggested by @weston, we can avoid calculating the same line twice by starting the inner loop from the corner next to `A`. Upvotes: 2 <issue_comment>username_3: Solution -------- In the space of all lines in the graph, the lines which pass by a corner are exactly the ones where the number of intersections is about to decrease. In other words, they each form a local maximum. And for every line which passes by at least one corner, there exists an associated line that passes by two corners that has the same number of intersections. The conclusion is that we only need to check the lines formed by two rectangle corners as they form a set that fully represents the local maxima of our problem. From those we pick the one which has the most intersections. Time complexity =============== This solution first needs to recover all lines that pass by two corners. The number of such lines is *O(n^2)*. We then need to count the number of intersections between a given line and a rectangle. This can obviously be done in *O(n)* by comparing to each rectangle. There might be a more efficient way to proceed, but we know that this algorithm is then at most *O(n^3)*.
Python3 implementation ====================== Here is a Python implementation of this algorithm. I oriented it more toward readability than efficiency, but it does exactly what the above defines. ``` def get_best_line(rectangles): """ Given a set of rectangles, return a line which intersects the most rectangles. """ # Recover all corners from all rectangles corners = set() for rectangle in rectangles: corners |= set(rectangle.corners) corners = list(corners) # Recover all lines passing by two corners lines = get_all_lines(corners) # Return the one which has the highest number of intersections with rectangles return max( ((line, count_intersections(rectangles, line)) for line in lines), key=lambda x: x[1]) ``` This implementation uses the following helpers. ``` def get_all_lines(points): """ Return a generator providing all lines generated by a combination of two points out of 'points' """ for i in range(len(points)): for j in range(i + 1, len(points)): yield Line(points[i], points[j]) def count_intersections(rectangles, line): """ Return the number of intersections with rectangles """ count = 0 for rectangle in rectangles: if line in rectangle: count += 1 return count ``` And here are the class definitions that serve as data structures for rectangles and lines. ``` import itertools from decimal import Decimal class Rectangle: def __init__(self, x_range, y_range): """ a rectangle is defined as a range in x and a range in y. For example, the rectangle (0, 0), (0, 1), (1, 0), (1, 1) is given by Rectangle((0, 1), (0, 1)) """ self.x_range = sorted(x_range) self.y_range = sorted(y_range) def __contains__(self, line): """ Return whether 'line' intersects the rectangle.
To do so we check if the line intersects one of the diagonals of the rectangle """ c1, c2, c3, c4 = self.corners x1 = line.intersect(Line(c1, c4)) x2 = line.intersect(Line(c2, c3)) if x1 is True or x2 is True \ or x1 is not None and self.x_range[0] <= x1 <= self.x_range[1] \ or x2 is not None and self.x_range[0] <= x2 <= self.x_range[1]: return True else: return False @property def corners(self): """Return the corners of the rectangle sorted in dictionary order""" return sorted(itertools.product(self.x_range, self.y_range)) class Line: def __init__(self, point1, point2): """A line is defined by two points in the graph""" x1, y1 = Decimal(point1[0]), Decimal(point1[1]) x2, y2 = Decimal(point2[0]), Decimal(point2[1]) self.point1 = (x1, y1) self.point2 = (x2, y2) def __str__(self): """Allows to print the equation of the line""" if self.slope == float('inf'): return "y = {}".format(self.point1[0]) else: return "y = {} * x + {}".format(round(self.slope, 2), round(self.origin, 2)) @property def slope(self): """Return the slope of the line, returning inf if it is a vertical line""" x1, y1, x2, y2 = *self.point1, *self.point2 return (y2 - y1) / (x2 - x1) if x1 != x2 else float('inf') @property def origin(self): """Return the origin of the line, returning None if it is a vertical line""" x, y = self.point1 return y - x * self.slope if self.slope != float('inf') else None def intersect(self, other): """ Checks if two lines intersect. 
Case where they intersect: return the x coordinate of the intersection Case where they do not intersect: return None Case where they are superposed: return True """ if self.slope == other.slope: if self.origin != other.origin: return None else: return True elif self.slope == float('inf'): return self.point1[0] elif other.slope == float('inf'): return other.point1[0] else: # also covers the cases where either slope is 0 return (other.origin - self.origin) / (self.slope - other.slope) ``` Example ------- Here is a working example of the above code. ``` rectangles = [ Rectangle([0.5, 1], [0, 1]), Rectangle([0, 1], [1, 2]), Rectangle([0, 1], [2, 3]), Rectangle([2, 4], [2, 3]), ] # Which represents the following rectangles (not quite to scale) # # * # * # # ** ** # ** ** # # ** # ** ``` We can clearly see that an optimal solution should find a line that passes by three rectangles and that is indeed what it outputs. ``` print('{} with {} intersections'.format(*get_best_line(rectangles))) # prints the best line found, which intersects 3 rectangles ``` Upvotes: 2
Then the brute force resolution scheme is * for all limit angles (O(N²) of them), + project the rectangles on the rotating line (O(N) operations), + count the overlaps and keep the largest (O(N Log N) to sort, then O(N) to count). This takes in total O(N³Log N) operations. [![enter image description here](https://i.stack.imgur.com/PxfGO.png)](https://i.stack.imgur.com/PxfGO.png) Assuming that the sorts needn't be re-done in full for every angle if we can do them incrementally, we can hope for a complexity lowered to O(N³). This needs to be checked. --- **Note:** The solutions that restrict the lines to pass through the corner of one rectangle are wrong. If you draw wedges from the four corners of a rectangle to the whole extent of another, there will remain empty space in which can lie a whole rectangle that won't be touched, even though there exists a line through the three of them. [![enter image description here](https://i.stack.imgur.com/VbFot.png)](https://i.stack.imgur.com/VbFot.png) Upvotes: 2 <issue_comment>username_1: We can have an `O(n^2 (log n + m))` dynamic-programming method by adapting username_2's idea of iterating over the corners slightly to insert the relationship of the current corner vis a vis all the other rectangles into an interval tree for each of our `4n` iteration cycles. A new tree will be created for the corner we are trying. For each rectangle's four corners we'll iterate over each of the other rectangles. What we'll insert will be the angles marking the arc the paired-rectangle's farthest corners create in relation to the current fixed corner. In the example directly below, for the fixed lower rectangle's corner `R` when inserting the record for the middle rectangle, we would insert the angles marking the arc from `p2` to `p1` in relation to `R` (about `(37 deg, 58 deg)`). 
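The projection-and-count step above is easy to prototype. Here is a rough Python sketch of it (my own, not from the answer; rectangles are assumed to be axis-aligned and given as corner pairs `((x0, y0), (x1, y1))`):

```python
import math

def max_crossed_at_angle(rects, theta):
    """Project each rectangle onto an axis at angle theta; the best line
    perpendicular to that axis crosses as many rectangles as the maximum
    number of overlapping projection intervals."""
    ux, uy = math.cos(theta), math.sin(theta)  # unit vector of the projection axis
    events = []
    for (x0, y0), (x1, y1) in rects:
        # The projection of an axis-aligned rectangle is the interval
        # spanned by the projections of its four corners.
        ps = [x * ux + y * uy for x in (x0, x1) for y in (y0, y1)]
        events.append((min(ps), 0))  # interval opens (0 sorts before 1 on ties,
        events.append((max(ps), 1))  # so touching rectangles count as crossed)
    events.sort()
    best = count = 0
    for _, kind in events:
        if kind == 0:
            count += 1
            best = max(best, count)
        else:
            count -= 1
    return best
```

Sweeping this over the O(N²) internal-tangent angles gives exactly the brute-force O(N³ log N) scheme described above.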
Then when we check the high rectangle in relation to `R`, we'll insert the interval of angles marking the arc from `p4` to `p3` in relation to `R` (about `(50 deg, 62 deg)`). When we insert the next arc record, we'll check it against all intersecting intervals and keep a record of the most intersections. [![enter image description here](https://i.stack.imgur.com/mmmJm.png)](https://i.stack.imgur.com/mmmJm.png) (Note that because any arc on a 360 degree circle for our purpose has a counterpart rotated 180 degrees, we may need to make an arbitrary cutoff (any alternative insights would be welcome). For example, this means that an arc from 45 degrees to 315 degrees would split into two: [0, 45] and [135, 180]. Any non-split arc could only intersect with one or the other but either way, we may need an extra hash to make sure rectangles are not double-counted.) Upvotes: 1 <issue_comment>username_5: Here is a sketch of an O(n^2 log n) solution. First, the preliminaries shared with other answers. When we have a line passing through some rectangles, we can translate it to any of the two sides until it passes through a corner of some rectangle. After that, we fix that corner as the center of rotation and rotate the line to any of the two sides until it passes through another corner. During the whole process, all points of intersection between our line and rectangle sides stayed on these sides, so the number of intersections stayed the same, as did the number of rectangles crossed by the line. As a result, we can consider only lines which pass through two rectangle corners, which is capped by O(n^2), and is a welcome improvement compared to the infinite space of arbitrary lines. So, how do we efficiently check all these lines? First, let us have an outer loop which fixes one point A and then considers all lines passing through A. There are O(n) choices of A. Now, we have one point A fixed, and want to consider all lines AB passing through all other corners B. 
In order to do that, first sort all other corners B according to the polar angle of AB, or, in other words, angle between axis Ox and vector AB. Angles are measured from -PI to +PI or from 0 to 2 PI or otherwise, the point in which we cut the circle to sort angles can be arbitrary. The sorting is done in O(n log n). Now, we have points B1, B2, ..., Bk sorted by the polar angle around point A (their number k is something like 4n-4, all corners of all rectangles except the one where point A is a corner). First, look at the line AB1 and count the number of rectangles crossed by that line in O(n). After that, consider rotating AB1 to AB2, then AB2 to AB3, all the way to ABk. The events which happen during the rotation are as follows: * When we rotate to ABi, and Bi is the first corner of some rectangle in our order, the number of rectangles crossed increases by 1 as soon as the rotating line hits Bi. * When we rotate to ABj, and Bj is the last corner of some rectangle in our order, the number of rectangles crossed decreases by 1 as soon as the line rotates past Bj. Which corners are first and last can be established with some O(n) preprocessing, after the sort, but before considering the ordered events. In short, we can rotate to the next such event and update the number of rectangles crossed in O(1). And there are k = O(n) events in total. What's left to do is to track the global maximum of this quantity throughout the whole algorithm. The answer is just this maximum. The whole algorithm runs in O(n \* (n log n + n + n)), which is O(n^2 log n), just as advertised. Upvotes: 3
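The shared preliminary — only lines through two rectangle corners need to be checked — also yields a simple O(n³) brute-force baseline, useful for validating the faster O(n² log n) sweep against. A Python sketch (the orientation-sign crossing test is my own choice, not from this answer):

```python
def cross(o, a, b):
    """Signed area test: which side of the line o->a the point b lies on."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def line_hits_rect(p, q, rect):
    """The infinite line through p and q meets an axis-aligned rectangle
    iff its four corners do not all lie strictly on one side of the line."""
    (x0, y0), (x1, y1) = rect
    signs = [cross(p, q, (x, y)) for x in (x0, x1) for y in (y0, y1)]
    return min(signs) <= 0 <= max(signs)

def best_line_bruteforce(rects):
    """O(n^3) baseline: try every line through two distinct corners."""
    corners = [(x, y) for (x0, y0), (x1, y1) in rects
               for x in (x0, x1) for y in (y0, y1)]
    best = 0
    for i in range(len(corners)):
        for j in range(i + 1, len(corners)):
            p, q = corners[i], corners[j]
            if p == q:  # duplicate corners from touching rectangles
                continue
            best = max(best, sum(line_hits_rect(p, q, r) for r in rects))
    return best
```

On the four-rectangle example from the first answer this returns 3, matching the claimed optimum.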
2018/03/17
<issue_start>username_0: I used to use [LaTeX-Box plugin](https://github.com/latex-box-team/latex-box) to compile and view LaTeX documents with [these mappings](https://stackoverflow.com/a/48622447/7952358). But I don't use it anymore because `\begin` and `\end` highlighting (and to a lower degree parentheses matching), which can't be turned off, make Vim a lot slower in big `.tex` files. I know how to create alternative mappings for compiling `.tex` documents: ``` autocmd FileType tex nnoremap :!latexmk -xelatex % autocmd FileType tex inoremap :!latexmk -xelatex %a ``` But this will create output files in the directory I am currently in, not in the directory where the `.tex` file is (for example, if I am in `~/Desktop` and I open in Vim a file which is in `~/Documents`, if I press `F2`, the `.pdf` and all other files are created in `~/Desktop` instead of in `~/Documents`). I don't know how to create mappings for viewing the compiled `.pdf` document. So, I am asking you to help me create mappings that will compile the `.tex` file and view the created `.pdf` file.
Thanks to [username_1](https://stackoverflow.com/users/546861/username_1) for [helping me](https://stackoverflow.com/a/49335402/7952358) to figure it out. Upvotes: 0
2018/03/17
<issue_start>username_0: I need help in creating chat rooms. It works like I'm entering into a specific room (like "Python Community", "DjangoDev", etc.). Packages that I use: * Django==1.9.7 * channels==1.1.8 * asgi-redis==1.4.3 I use the slug fields of the group name. I filter this slug in Channels find a group and call `save` every time a new message comes. But Channels throws an error which says that the name of group is invalid: ```py (venv) alibek@OverlorD:~/Desktop/my_porject/RedProject$ ./manage.py runworker 2018-03-17 13:59:51,607 - INFO - runworker - Using single-threaded worker. 2018-03-17 13:59:51,608 - INFO - runworker - Running worker against channel layer default (asgi_redis.core.RedisChannelLayer) 2018-03-17 13:59:51,608 - INFO - worker - Listening on channels chat-messages, http.request, websocket.connect, websocket.disconnect, websocket.receive Not Found: /home/ Not Found: /favicon.ico Traceback (most recent call last): File "./manage.py", line 14, in execute\_from\_command\_line(sys.argv) File "/home/alibek/Desktop/my\_porject/venv/local/lib/python2.7/site-packages/django/core/management/\_\_init\_\_.py", line 353, in execute\_from\_command\_line utility.execute() File "/home/alibek/Desktop/my\_porject/venv/local/lib/python2.7/site-packages/django/core/management/\_\_init\_\_.py", line 345, in execute self.fetch\_command(subcommand).run\_from\_argv(self.argv) File "/home/alibek/Desktop/my\_porject/venv/local/lib/python2.7/site-packages/django/core/management/base.py", line 348, in run\_from\_argv self.execute(\*args, \*\*cmd\_options) File "/home/alibek/Desktop/my\_porject/venv/local/lib/python2.7/site-packages/django/core/management/base.py", line 399, in execute output = self.handle(\*args, \*\*options) File "/home/alibek/Desktop/my\_porject/venv/local/lib/python2.7/site-packages/channels/management/commands/runworker.py", line 83, in handle worker.run() File 
"/home/alibek/Desktop/my\_porject/venv/local/lib/python2.7/site-packages/channels/worker.py", line 151, in run consumer\_finished.send(sender=self.\_\_class\_\_) File "/home/alibek/Desktop/my\_porject/venv/local/lib/python2.7/site-packages/django/dispatch/dispatcher.py", line 192, in send response = receiver(signal=self, sender=sender, \*\*named) File "/home/alibek/Desktop/my\_porject/venv/local/lib/python2.7/site-packages/channels/message.py", line 105, in send\_and\_flush sender.send(message, immediately=True) File "/home/alibek/Desktop/my\_porject/venv/local/lib/python2.7/site-packages/channels/channel.py", line 88, in send self.channel\_layer.send\_group(self.name, content) File "/home/alibek/Desktop/my\_porject/venv/local/lib/python2.7/site-packages/asgi\_redis/core.py", line 289, in send\_group assert self.valid\_group\_name(group), "Group name not valid" File "/home/alibek/Desktop/my\_porject/venv/local/lib/python2.7/site-packages/asgiref/base\_layer.py", line 122, in valid\_group\_name raise TypeError("Group name must be a valid unicode string containing only ASCII alphanumerics, hyphens, or periods.") TypeError: Group name must be a valid unicode string containing only ASCII alphanumerics, hyphens, or periods. ``` Backend code: ```py def chat_room(request, room_name_url): room = get_object_or_404(ChatRoom, slug=room_name_url) return render(request, 'chat_room.html', {'room': room}) def msg_consumer(message): # Save to model room_slug = message.content['room'] room = ChatRoom.objects.filter(slug=room_slug) # Broadcast to listening sockets Group("chat-%s" % room).send({ "text": message.content['message'], }) # Connected to websocket.connect @channel_session @enforce_ordering def connect(message): message.reply_channel.send({"accept": True}) # Url of room e.g localhost:8000/chat/django path = urlsplit(message.content['path']) # Returns slug field of room from url, i.e. 
'django' room = path[2].strip("/chat/").decode('utf-8') # Save room in session and add us to the group message.channel_session['room'] = room message.content['room'] = room Group("chat-%s" % room).add(message.reply_channel) # Connected to websocket.receive @channel_session @enforce_ordering def receive(message): # Stick the message onto the processing queue Channel("chat-messages").send({ "room": message.channel_session['room'], "message": message['text'], }) # Connected to websocket.disconnect @channel_session @enforce_ordering def disconnect(message): Group("chat-%s" % message.channel_session['room']).discard(message.reply_channel) ``` Frontend code: ```js var messages = document.getElementById("messages"); var text = document.getElementById("text"); var button = document.getElementById("send"); var ws_scheme = window.location.protocol === "https:" ? "wss" : "ws"; var socket = new ReconnectingWebSocket( ws_scheme + '://' + window.location.host + window.location.pathname ); socket.onmessage = function (e) { var message = document.createElement("p"); var data = JSON.parse(e.data); var noQuotes = data.msg.split('"').join(''); var user = data.user; message.innerHTML = user + ': ' + noQuotes; console.log(JSON.parse(e.data)); messages.appendChild(message); }; button.addEventListener("click", function (event) { event.preventDefault(); var data_to_server = JSON.stringify(text.value); socket.send(data_to_server); text.value = ''; text.focus(); }); ``` Also, I don't like using sessions, Django throws an error for the first time entering into the app.<issue_comment>username_1: Well, I found the trouble. I used an object instead of a string: ```py room = ChatRoom.objects.filter(slug=room_slug) # Broadcast to listening sockets Group("chat-%s" % room).send({... ``` Solution: ```py Group("chat-%s" % room_slug).send({... ``` Upvotes: 2 <issue_comment>username_2: In my case it was an autocomplete typo... 
``` await self.channel_layer.group_discard(self.user_group, -->self.channel_layer<--) ``` vs ``` await self.channel_layer.group_discard(self.user_group, -->self.channel_name<--) ``` Upvotes: 1
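For reference, the check that raises this error is just a pattern match on the group name; formatting a QuerySet into the string injects its repr, which contains characters the pattern rejects. A minimal Python sketch of that behaviour (the regex only mirrors the error message's rule, and `FakeQuerySet` is an illustrative stand-in, not Django's class):

```python
import re

# Mirrors the rule in the error message: only ASCII alphanumerics,
# hyphens, or periods are allowed in a group name.
GROUP_NAME_RE = re.compile(r"^[a-zA-Z0-9.\-]+$")

def valid_group_name(name):
    return isinstance(name, str) and bool(GROUP_NAME_RE.match(name))

class FakeQuerySet:
    """Stand-in for a Django QuerySet: string-formatting it yields its repr."""
    def __repr__(self):
        return "<QuerySet [<ChatRoom: django>]>"

room_slug = "django"
print(valid_group_name("chat-%s" % room_slug))       # True: a plain slug is fine
print(valid_group_name("chat-%s" % FakeQuerySet()))  # False: repr has <, [, spaces
```

That is exactly why passing the slug string instead of the filtered QuerySet fixes the `TypeError`.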
2018/03/17
<issue_start>username_0: What am I trying to do? Display 20 images vertically. Have to request for the next 20 images when scrolled to the last image (pagination or infinite scroll). Whats happening? Using recyclerview with height set to wrap\_content: None of the images are being displayed. Using recylerview with height set to match\_parent: I could see the images, but the following request is getting called when the 5th image is being displayed. But i wanted it to be called when scrolled to last image. Heading ------- activity\_main.xml ``` xml version="1.0" encoding="utf-8"? ``` item\_movie.xml ``` xml version="1.0" encoding="utf-8"? ``` Main\_activity.java ``` package technology.nine.moviedb; import android.support.v7.app.AppCompatActivity; import android.os.Bundle; import android.support.v7.widget.LinearLayoutManager; import android.support.v7.widget.RecyclerView; import android.util.Log; import android.view.View; import android.widget.AbsListView; import android.widget.ProgressBar; import android.widget.Toast; import java.util.List; import retrofit2.Call; import retrofit2.Callback; import retrofit2.Response; import technology.nine.moviedb.adapter.MovieAdapter; import technology.nine.moviedb.api.ApiClient; import technology.nine.moviedb.api.ApiInterface; import technology.nine.moviedb.model.Movie; import technology.nine.moviedb.model.MovieResponse; public class MainActivity extends AppCompatActivity { public static final String API_KEY = "<KEY>"; Boolean isScrolling = false; int currentPage = 1; int currentItems, totalItems, scrollOutItems; RecyclerView recyclerView; LinearLayoutManager linearLayoutManager; MovieAdapter movieAdapter; ProgressBar progressBar; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); progressBar = (ProgressBar) findViewById(R.id.progress); recyclerView = (RecyclerView) findViewById(R.id.recycler_view); linearLayoutManager = new 
LinearLayoutManager(this); movieAdapter = new MovieAdapter(R.layout.item_movie, getApplicationContext()); recyclerView.setAdapter(movieAdapter); recyclerView.setLayoutManager(linearLayoutManager); fetchData(); } private void fetchData() { progressBar.setVisibility(View.VISIBLE); Log.i("progress bar", "called"); ApiInterface apiInterface = ApiClient.getClient().create(ApiInterface.class); Call call = apiInterface.getTopRatedMovies(API\_KEY, currentPage); call.enqueue(new Callback() { @Override public void onResponse(Call call, Response response) { List movies = response.body().getResults(); movieAdapter.addAll(movies); recyclerView.addOnScrollListener(new RecyclerView.OnScrollListener() { @Override public void onScrollStateChanged(RecyclerView recyclerView, int newState) { super.onScrollStateChanged(recyclerView, newState); if (newState == AbsListView.OnScrollListener.SCROLL\_STATE\_TOUCH\_SCROLL){ isScrolling = true; } } @Override public void onScrolled(RecyclerView recyclerView, int dx, int dy) { super.onScrolled(recyclerView, dx, dy); currentItems = linearLayoutManager.getChildCount(); totalItems = linearLayoutManager.getItemCount(); scrollOutItems = linearLayoutManager.findFirstVisibleItemPosition(); if(isScrolling & (currentItems + scrollOutItems == totalItems)){ isScrolling = false; currentPage += 1; fetchData(); } } }); progressBar.setVisibility(View.GONE); } @Override public void onFailure(Call call, Throwable t) { Toast.makeText(MainActivity.this, t.getMessage(), Toast.LENGTH\_SHORT).show(); } }); } } ``` MovieAdapter.java ``` package technology.nine.moviedb.adapter; import android.content.Context; import android.support.v7.widget.RecyclerView; import android.util.Log; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.ImageView; import android.widget.LinearLayout; import android.widget.RelativeLayout; import android.widget.TextView; import com.bumptech.glide.Glide; import 
com.bumptech.glide.load.engine.DiskCacheStrategy; import com.bumptech.glide.request.RequestOptions; import java.util.ArrayList; import java.util.List; import technology.nine.moviedb.GlideApp; import technology.nine.moviedb.R; import technology.nine.moviedb.api.ApiClient; import technology.nine.moviedb.model.Movie; /** * Created by Ganesh on 25-02-2018. */ public class MovieAdapter extends RecyclerView.Adapter { private List movies; private int rowLayout; private Context context; public MovieAdapter(int rowLayout, Context context) { movies = new ArrayList<>(); this.rowLayout = rowLayout; this.context = context; } @Override public MovieViewHolder onCreateViewHolder(ViewGroup parent, int viewType) { View view = LayoutInflater.from(parent.getContext()).inflate(rowLayout, parent, false); return new MovieViewHolder(view); } @Override public void onBindViewHolder(MovieViewHolder holder, int position) { //holder.title.setText(movies.get(position).getTitle()); String img\_url\_path = ApiClient.IMAGE\_URL + movies.get(position).getPosterPath(); GlideApp.with(context) .load(img\_url\_path) .override(holder.poster.getWidth(), holder.poster.getHeight()) .diskCacheStrategy(DiskCacheStrategy.AUTOMATIC) .into(holder.poster); } @Override public int getItemCount() { return movies.size(); } public class MovieViewHolder extends RecyclerView.ViewHolder { LinearLayout moviesLayout; ImageView poster; public MovieViewHolder(View v) { super(v); moviesLayout = (LinearLayout) v.findViewById(R.id.item\_movie); poster = (ImageView) v.findViewById(R.id.image\_view); } } public void addAll(List movies) { for(Movie m: movies) { add(m); } } private void add(Movie m) { movies.add(m); notifyItemInserted(movies.size() - 1); } } ``` Github [link](https://github.com/ganeshmunisifreddy/MovieDB) for this project. Help me to resolve this issue. Thanks in advance.<issue_comment>username_1: item\_movie.xml ``` xml version="1.0" encoding="utf-8"? ``` set fix height of layout---------50dp,60dp,100dp etc... 
Upvotes: 0 <issue_comment>username_2: Since your items use match\_parent, each item is given a fixed (full) size, so the list is filled after about 5 items and no more space is left. Consider using wrap\_content for a dynamic size in item\_movie.xml: ``` ``` wrap\_content might work for the height depending on the case, so check it first; if you want to keep the aspect ratio, you can use a weight with zero height and Android will do the work: ``` ``` > > android:scaleType="centerInside" > > > will scale the image while rendering it. Upvotes: 1 <issue_comment>username_3: Everything seems to be fine. Try modifying the condition inside the onScrolled function as below: ``` int buffer = 3; // you can modify this accordingly if (isScrolling && (scrollOutItems > (totalItems - buffer))) { // call fetchData(); } ``` Upvotes: 2 [selected_answer]
2018/03/17
<issue_start>username_0: ``` double avg = (double)ListUtil.sum(list)/list.size(); k = 10; while (avg <= k - 0.5) { list.add(k); avg = (double) ListUtil.sum(list)/list.size(); System.out.println(avg); } ``` I have a while loop like the above, with the condition `avg <= 9.5` Output ``` 9.0 9.25 9.4 9.5 9.571428571428571 ``` Why does it run until `9.571428571428571` and not stop at `9.5`?<issue_comment>username_1: You are looping one time too many. Try doing the calculation last so that the `while` conditional comes directly after it. ``` double avg = (double)ListUtil.sum(list)/list.size(); k = 10; while (avg <= k - 0.5) { list.add(k); System.out.println(avg); avg = (double) ListUtil.sum(list)/list.size(); } ``` Upvotes: 0 <issue_comment>username_2: **EDIT**: We should check if adding a value to the list will push the average out of the required range before adding it. ``` double avg = (double)ListUtil.sum(list)/list.size(); double editedAvg = avg; k = 10; while (avg <= k - 0.5) { avg = (double) avg + ((k - avg)/(double)(list.size() + 1)); if(avg <= k - 0.5) { editedAvg = avg; list.add(k); System.out.println(editedAvg); } } System.out.println("Final avg: " + editedAvg); ``` [Demo](https://ideone.com/60uIiY) The formula for adding a value to the average is taken from: <https://math.stackexchange.com/questions/22348/how-to-add-and-subtract-values-from-an-average> Also, you should really look into using a debugger which is probably available in your IDE - It will allow you to go step by step through your code and will help you figure out what step causes the wrong output. <http://www.vogella.com/tutorials/EclipseDebugging/article.html> And for fun (but still a useful debugging technique no matter how good a programmer you are) <https://en.wikipedia.org/wiki/Rubber_duck_debugging> Thank you to *dasblinkenlight* for pointing out a problem with the previous answer and for providing the demo source code. Upvotes: 2 [selected_answer]
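The accepted idea generalizes: test what the average *would become* before committing the append, so the loop can never overshoot the bound. A compact Python sketch of that pattern (the values and bound are chosen for illustration):

```python
def pad_to_average(values, k, bound):
    """Append copies of k only while the resulting average stays <= bound,
    by testing the prospective average BEFORE committing the append."""
    values = list(values)
    total = sum(values)
    while (total + k) / (len(values) + 1) <= bound:
        values.append(k)
        total += k
    return values

result = pad_to_average([9, 9], 10, 9.5)
print(result, sum(result) / len(result))  # [9, 9, 10, 10] 9.5
```

Starting from `[9, 9]`, appending 10 twice brings the average to exactly 9.5; a third append would give 9.6, so the loop stops in time instead of one step too late.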
2018/03/17
<issue_start>username_0: I can share files added from other folders, but not from the camera folder. I am getting the following exception on a Lenovo K3 Note. ``` java.lang.IllegalArgumentException: Failed to find configured root that contains /storage/9016-4EF8/DCIM/Camera/IMG_20180317_111252.jpg ``` file\_paths.xml ``` xml version="1.0" encoding="utf-8"? ```<issue_comment>username_1: > > can share files added from other folders, but not from the camera folder > > > No. Has nothing to do with the camera. You cannot use FileProvider for a removable SD card. Upvotes: 0 <issue_comment>username_2: Yes, you cannot use FileProvider for a removable SD card. But you can use `root-path` with a warning > > Element root-path is not allowed here > > > like the following ``` ``` Upvotes: 4 [selected_answer]
2018/03/17
<issue_start>username_0: I am programmatically building a nested JSON from a table. The JSON looks something like this: ``` {"date":"03-16-2018 06:57:02", "details": [ { "motorstate":0, "startTime":"03-16-2018 20:41:57", }, { "motorstate":0, "startTime":"03-16-2018 06:57:02", } ] }, {"date":"03-15-2018 08:08:48", "details": [ { "motorstate":0, "startTime":"03-16-2018 03:53:30", } ] } ``` If you look at the above example, consider the second record: ``` {"date":"03-15-2018 08:08:48", "details": [ { "motorstate":0, "startTime":"03-16-2018 03:53:30", } ] } ``` The dates don't match. This is because the date shown here is in IST but actually stored in UTC in my Google Datastore. In UTC, it's still 03-15-2018 and not 03-16-2018. My question is how can we perform a date difference in different timezones other than UTC in Java? The Date.getTime() method always gives the difference in UTC and not in the local timezone.
SensorSummaryDataOut(); TotalSummaryDataOut totalSummary = new TotalSummaryDataOut(); Calendar calendar = Calendar.getInstance(); calendar.setTimeZone(TimeZone.getTimeZone("IST")); Calendar calendar1 = Calendar.getInstance(); calendar1.setTimeZone(TimeZone.getTimeZone("IST")); SimpleDateFormat sdf = new SimpleDateFormat("MM-dd-yyyy HH:mm:ss"); sdf.setTimeZone(TimeZone.getTimeZone("IST")); SimpleDateFormat sdf1 = new SimpleDateFormat("MM-dd-yyyy"); //sdf.setTimeZone(TimeZone.getTimeZone("IST")); long stopTime; long startTime; long pStartTime; long diffTime; long diffDay; Date pDateWithoutTime; Date eDateWithoutTime; while(resultList.hasNext()) { e = resultList.next(); startTime = (e.contains("startTime"))?e.getTimestamp("startTime").toSqlTimestamp().getTime():0; stopTime = (e.contains("stopTime"))?e.getTimestamp("stopTime").toSqlTimestamp().getTime():0; //log.info("Start Date : " + e.getTimestamp("startTime").toString() + " - " + String.valueOf(startTime)); //log.info("Stop Date : " + e.getTimestamp("stopTime").toString() + " - " + String.valueOf(stopTime)); //log.info("Usage Volume :" + String.valueOf(e.getLong("usageVolume"))); sensorDataOut = new SensorDataOut(); calendar.setTimeInMillis(stopTime); sensorDataOut.stopTime = sdf.format(calendar.getTime()); calendar.setTimeInMillis(startTime); sensorDataOut.startTime = sdf.format(calendar.getTime()); sensorDataOut.motorstate = (e.contains("motorstate"))?(int)e.getLong("motorstate"):-1; sensorDataOut.startVolume = (e.contains("startVolume"))?(int)e.getLong("startVolume"):-1; sensorDataOut.stopVolume = (e.contains("stopVolume"))?(int)e.getLong("stopVolume"):-1; sensorDataOut.usageTime = (e.contains("usageTime"))?e.getLong("usageTime"):-1; sensorDataOut.usageVolume = (e.contains("usageVolume"))?(int)e.getLong("usageVolume"):-1; if(pe!=null) { //Get the date difference in terms of days. 
If it is same day then add the volume consumed pStartTime= pe.getTimestamp("startTime").toSqlTimestamp().getTime(); try{ calendar.setTimeInMillis(pStartTime); pDateWithoutTime = sdf1.parse(sdf1.format(calendar.getTime())); calendar1.setTimeInMillis(e.getTimestamp("startTime").toSqlTimestamp().getTime()); eDateWithoutTime = sdf1.parse(sdf1.format(calendar1.getTime())); } catch(Exception ex){ log.info("Exception while parsing date"); continue; } diffTime = Math.abs(pDateWithoutTime.getTime() - eDateWithoutTime.getTime()); diffDay = TimeUnit.DAYS.convert(diffTime, TimeUnit.MILLISECONDS); //log.info("pDateWithoutTime: " + pDateWithoutTime + ", eDateWithoutTime: " + eDateWithoutTime + ", consumedVolume: " // + sensorDataOut.usageVolume; if(diffDay!=0) //If not same day { totalSummary.totVolume = totalSummary.totVolume + summary.totVolume; totalSummary.totDuration = totalSummary.totDuration + summary.totDuration; totalSummary.details.add(summary); summary = new SensorSummaryDataOut(); } } summary.date = sensorDataOut.startTime; summary.totVolume = summary.totVolume + sensorDataOut.usageVolume; summary.totDuration = summary.totDuration + sensorDataOut.usageTime; summary.details.add(sensorDataOut); pe = e; } if(summary.details.size()>0) { totalSummary.totVolume = totalSummary.totVolume + summary.totVolume; totalSummary.totDuration = totalSummary.totDuration + summary.totDuration; totalSummary.details.add(summary); summary = new SensorSummaryDataOut(); } totalSummary.avgVolume = totalSummary.totVolume/totalSummary.details.size(); totalSummary.deviceguid = deviceGuid; String json = ""; Gson gson = new Gson(); json = gson.toJson(totalSummary); return json; } //End of Function ``` Upvotes: 0 <issue_comment>username_3: tl;dr ===== > > Date difference in different timezones in java > > > ``` Period.between( LocalDateTime.parse( "03-15-2018 08:08:48" , DateTimeFormatter.ofPattern( “MM-dd-uuuu HH:mm:ss” ) ) .atZone( ZoneId.of( ”Asia/Kolkata" ) ) .toLocalDate() , LocalDate.now( 
ZoneId.of( "Asia/Kolkata" ) ) ) ``` Details ======= The modern approach uses the *java.time* classes rather than the troublesome old legacy date-time classes such as `Date` and `Calendar`. Tip: Avoid custom formatting patterns when serializing date-time values to text. Use standard ISO 8601 formats only. Tip: when exchanging date-time values as text, always include an indicator of the offset-from-UTC and the time zone name. First, parse your input strings as `LocalDateTime` because they lack any indication of offset or zone. Define a formatting pattern to match the input. ``` DateTimeFormatter f = DateTimeFormatter.ofPattern( "MM-dd-uuuu HH:mm:ss" ) ; ``` Parse. ``` String input = "03-15-2018 08:08:48" ; LocalDateTime ldt = LocalDateTime.parse( input , f ) ; ``` You claim to know that these inputs were intended to represent a moment in India time. Apply a `ZoneId` to get a `ZonedDateTime`. ``` ZoneId z = ZoneId.of( "Asia/Kolkata" ) ; ZonedDateTime zdt = ldt.atZone( z ) ; ``` To get the date-only value, extract a `LocalDate`. ``` LocalDate ld = zdt.toLocalDate() ; ``` To represent the delta between dates as a number of years, months, and days unattached to the timeline, use `Period`. ``` Period p = Period.between( ld , LocalDate.now( z ) ) ; ``` For a count of total days. ``` long days = ChronoUnit.DAYS.between( ld , LocalDate.now( z ) ) ; ``` --- About *java.time* ================= The [*java.time*](http://docs.oracle.com/javase/9/docs/api/java/time/package-summary.html) framework is built into Java 8 and later. These classes supplant the troublesome old [legacy](https://en.wikipedia.org/wiki/Legacy_system) date-time classes such as [`java.util.Date`](https://docs.oracle.com/javase/9/docs/api/java/util/Date.html), [`Calendar`](https://docs.oracle.com/javase/9/docs/api/java/util/Calendar.html), & [`SimpleDateFormat`](http://docs.oracle.com/javase/9/docs/api/java/text/SimpleDateFormat.html).
The [*Joda-Time*](http://www.joda.org/joda-time/) project, now in [maintenance mode](https://en.wikipedia.org/wiki/Maintenance_mode), advises migration to the [java.time](http://docs.oracle.com/javase/9/docs/api/java/time/package-summary.html) classes. To learn more, see the [*Oracle Tutorial*](http://docs.oracle.com/javase/tutorial/datetime/TOC.html). And search Stack Overflow for many examples and explanations. Specification is [JSR 310](https://jcp.org/en/jsr/detail?id=310). You may exchange *java.time* objects directly with your database. Use a [JDBC driver](https://en.wikipedia.org/wiki/JDBC_driver) compliant with [JDBC 4.2](http://openjdk.java.net/jeps/170) or later. No need for strings, no need for `java.sql.*` classes. Where to obtain the java.time classes? * [**Java SE 8**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_8), [**Java SE 9**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_9), and later + Built-in. + Part of the standard Java API with a bundled implementation. + Java 9 adds some minor features and fixes. * [**Java SE 6**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_6) and [**Java SE 7**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_7) + Much of the java.time functionality is back-ported to Java 6 & 7 in [***ThreeTen-Backport***](http://www.threeten.org/threetenbp/). * [**Android**](https://en.wikipedia.org/wiki/Android_(operating_system)) + Later versions of Android bundle implementations of the java.time classes. + For earlier Android (<26), the [***ThreeTenABP***](https://github.com/JakeWharton/ThreeTenABP) project adapts *ThreeTen-Backport* (mentioned above). See [*How to use ThreeTenABP…*](http://stackoverflow.com/q/38922754/642706). The [**ThreeTen-Extra**](http://www.threeten.org/threeten-extra/) project extends java.time with additional classes. This project is a proving ground for possible future additions to java.time. 
You may find some useful classes here such as [`Interval`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/Interval.html), [`YearWeek`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/YearWeek.html), [`YearQuarter`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/YearQuarter.html), and [more](http://www.threeten.org/threeten-extra/apidocs/index.html). Upvotes: 2
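For a runnable end-to-end sketch of the steps above (the class and method names are mine; the fixed "today" exists only to make the result deterministic, and in real code you would pass `LocalDate.now( z )` instead):

```
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoUnit;

public class DateDiffSketch {
    static final DateTimeFormatter F =
            DateTimeFormatter.ofPattern("MM-dd-uuuu HH:mm:ss");
    static final ZoneId ZONE = ZoneId.of("Asia/Kolkata");

    // Parse the zone-less input, pin it to India time, drop the
    // time-of-day, and count whole days up to `today`.
    static long daysBetween(String input, LocalDate today) {
        LocalDate start = LocalDateTime.parse(input, F)
                                       .atZone(ZONE)
                                       .toLocalDate();
        return ChronoUnit.DAYS.between(start, today);
    }

    public static void main(String[] args) {
        System.out.println(
                daysBetween("03-15-2018 08:08:48",
                            LocalDate.of(2018, 3, 17))); // prints 2
    }
}
```

With the question's input and a "today" of 2018-03-17, the count of whole days is 2.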
2018/03/17
3,049
10,102
<issue_start>username_0: I have one column with JSON data, i want to use that column with my query, but i was try so many thing but can't please guide me for that How to use Column with JSON string with IN Operator in SQL ? i have following table and query. ``` table1 ------------ id | ids | 1 | [1,2] | 2 | [3,4] | 3 | [5] | 4 | [] | 5 | [1,5,6]| ------------- table2 ------------ id | name | 1 | raj | 2 | mohan | 3 | test | 4 | name1 | 5 | hello | ------------- SELECT * FROM `table1` t1 LEFT JOIN table2 t2 ON ( t2.id IN (replace(replace(t1.ids,"[",""),"]","")) ) WHERE p.id = 2 ```<issue_comment>username_1: Below method will return you your UTC time to current Timezone and this worked for me ``` public static Date getLocalDateObjFromUTC(String date, String time) throws ParseException { String dateAndTimeInUTC = date + " " + time; SimpleDateFormat localDateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss", Locale.getDefault()); localDateFormat.setTimeZone(TimeZone.getTimeZone("UTC")); Date dateInLocalTimeZone = localDateFormat.parse(dateAndTimeInUTC); return dateInLocalTimeZone; } ``` Upvotes: 0
2018/03/17
615
2,050
<issue_start>username_0: I am trying to build an online random forest classifier. In a for loop I faced an error that I cannot find the reason for. ``` clf = RandomForestClassifier(n_estimators=1, warm_start=True) ``` In the for loop, I am increasing the number of estimators while reading new data. ``` clf.n_estimators = (clf.n_estimators + 1) clf = clf.fit(data_batch, label_batch) ``` After going through the loop 3 times, running the following predict call in the loop: ``` predicted = clf.predict(data_batch) ``` I get the following error: ``` ValueError: non-broadcastable output operand with shape (500,1) doesn't match the broadcast shape (500,2) ``` The data has shape (500,153) and the label has shape (500,). Here is a more complete code sample: ``` clf = RandomForestClassifier(n_estimators=1, warm_start=True) clf = clf.fit(X_train, y_train) predicted = clf.predict(X_test) batch_size = 500 for i in xrange(batch_init_size, records, batch_size): from_ = (i + 1) to_ = (i + batch_size + 1) data_batch = data[from_:to_, :] label_batch = label[from_:to_] predicted = clf.predict(data_batch) clf.n_estimators = (clf.n_estimators + 1) clf = clf.fit(data_batch, label_batch) ```<issue_comment>username_1: I found the cause of the problem: because the data is imbalanced, there is a high chance that all the samples in some batches come from a single class. In that case the accumulation code in forest.py cannot broadcast a one-dimensional prediction array onto a two-dimensional one. Here is the code in forest.py from scikit-learn: ``` def accumulate_prediction(predict, X, out, lock): prediction = predict(X, check_input=False) with lock: if len(out) == 1: out[0] += prediction else: for i in range(len(out)): out[i] += prediction[i] ``` Upvotes: 1 <issue_comment>username_2: Yes, the error is due to some batches not containing samples of every class. I solved this by choosing the batches so that every batch contains all classes. Upvotes: 2
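username_2's fix can be sketched concretely (plain Python; the helper name is mine, not a scikit-learn API): interleave the sample order round-robin by class before slicing, so that batches are not single-class even when the raw data is sorted or clustered by label.

```
from itertools import zip_longest

def interleave_by_class(labels):
    """Return sample indices reordered round-robin over classes, so that
    slicing the reordered data into batches avoids single-class batches
    (until a minority class is exhausted)."""
    buckets = {}
    for idx, y in enumerate(labels):
        buckets.setdefault(y, []).append(idx)
    order = []
    for group in zip_longest(*buckets.values()):
        order.extend(i for i in group if i is not None)
    return order

# Worst case for naive slicing: data sorted by class.
labels = [0] * 6 + [1] * 6
order = interleave_by_class(labels)
batches = [order[i:i + 4] for i in range(0, len(order), 4)]
# Every batch now contains both classes.
assert all(len({labels[i] for i in b}) == 2 for b in batches)
```

In the question's loop this means building `data_batch` and `label_batch` from slices of `order` rather than from raw row ranges.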
2018/03/17
957
2,838
<issue_start>username_0: I am obtaining two email id's from the database by a query, both stored in a single variable. I want to send email to these two addresses by PHPMailer, keeping them in cc. Currently only one email is selected and passed in cc. Can you tell me where I am going wrong? My code here, ``` $get_cc_email_id_sql=mysql_query("select * from tbl_name where column_name IN(13,5)"); $user_email_cc=''; while ($get_data_cc=mysql_fetch_array($get_cc_email_id_sql)) { $user_email_cc=$get_data_cc['email']; } $mail = new PHPMailer(); $subject = "Mail"; $content ="XYZ"; $mail->IsSMTP(); $mail->SMTPDebug = 0; $mail->SMTPAuth = TRUE; $mail->SMTPSecure = "ssl"; $mail->Debugoutput = 'html'; $mail->Port = 465; $mail->Username = "<EMAIL>"; // Changed username and password from $mail->Password = "xyz"; $mail->Host = "ssl://smtp.xyz.com"; $mail->Mailer = "smtp"; $mail->SetFrom("<EMAIL>", "XYZ"); $mail->AddAddress(<EMAIL>); $mail->AddCC($user_email_cc); $mail->Subject = $subject; $mail->WordWrap = 80; $mail->MsgHTML($content); $mail->IsHTML(true); if(!$mail->Send()) echo "Problem sending mail."; else echo "Mail Sent"; ```<issue_comment>username_1: Use `$user_email_cc` as an array; it will then store both emails, at positions `0` and `1`. ``` $user_email_cc=array(); while ($get_data_cc=mysql_fetch_array($get_cc_email_id_sql)) { $user_email_cc[] =$get_data_cc['email']; } ``` **New Code** ``` $get_cc_email_id_sql=mysql_query("select * from tbl_name where column_name IN(13,5)"); $user_email_cc=array(); while ($get_data_cc=mysql_fetch_array($get_cc_email_id_sql)) { $user_email_cc[] =$get_data_cc['email']; } $mail = new PHPMailer(); $subject = "Mail"; $content ="XYZ"; $mail->IsSMTP(); $mail->SMTPDebug = 0; $mail->SMTPAuth = TRUE; $mail->SMTPSecure = "ssl"; $mail->Debugoutput = 'html'; $mail->Port = 465; $mail->Username = "<EMAIL>"; // Changed username and password from $mail->Password = "xyz"; $mail->Host = "ssl://smtp.xyz.com"; $mail->Mailer = "smtp"; $mail->SetFrom("<EMAIL>", "XYZ"); 
foreach($user_email_cc as $email_cc){ $mail->AddCC($email_cc); } $mail->AddAddress(<EMAIL>); $mail->Subject = $subject; $mail->WordWrap = 80; $mail->MsgHTML($content); $mail->IsHTML(true); if(!$mail->Send()) echo "Problem sending mail."; else echo "Mail Sent"; ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: Can you call `$mail->AddCC(...)` multiple times, just like `$mail->AddAddress(...)`? Upvotes: 0 <issue_comment>username_3: use this code ``` $mail->AddCC('<EMAIL>', 'Person One'); $mail->AddCC('<EMAIL>', 'Person Two'); ``` Upvotes: 2
2018/03/17
1,910
7,631
<issue_start>username_0: I am trying to understand how the compiler checks whether the position for a type parameter is covariant or contravariant. As far as I know, if the type parameter is annotated with the +, which is the covariant annotation, then no method can have an input parameter typed with that class/trait's type parameter. For example, `bar` cannot have a parameter of type `T`. ``` class Foo[+T] { def bar(param: T): Unit = println("Hello foo bar") } ``` This is because the position of the parameter of `bar()` is considered to be negative, which means any type parameter in that position is in a contravariant position. I am curious how the Scala compiler can determine whether every location in the class/trait is positive, negative, or neutral. It seems that there exist some rules, like flipping the position under some conditions, but I couldn't understand them clearly. Also, if possible, I would like to know how these rules are defined. For example, it seems that parameters of methods defined in a class with a covariant annotation, like the `bar()` method in the `Foo` class, should be in a contravariant position. Why?<issue_comment>username_1: > > I am curious how the Scala compiler can determine whether every location in the > class/trait is positive, negative, or neutral. It seems that there > exist some rules, like flipping the position under some conditions, but I > couldn't understand them clearly. > > > The [Scala compiler](https://github.com/scala/scala) has a phase called the parser (like most compilers), which goes over the text and parses out tokens. One of these tokens is called variance. If we dive into the detail, there's a method called [`Parsers.typeParamClauseOpt`](https://github.com/scala/scala/blob/2.13.x/src/compiler/scala/tools/nsc/ast/parser/Parsers.scala#L2407) which is responsible for parsing out the type parameter clause. 
The part relevant to your question is this: ``` def typeParam(ms: Modifiers): TypeDef = { var mods = ms | Flags.PARAM val start = in.offset if (owner.isTypeName && isIdent) { if (in.name == raw.PLUS) { in.nextToken() mods |= Flags.COVARIANT } else if (in.name == raw.MINUS) { in.nextToken() mods |= Flags.CONTRAVARIANT } } ``` The parser looks for the `+` and `-` signs in the type parameter signature, and creates a class called `TypeDef` which describes the type and states that it is covariant, contravariant or invariant. > > Also, if possible, I would like to know how these rules are defined. > > > Variance rules are universal, and they stem from a branch of mathematics called [Category Theory](https://en.wikipedia.org/wiki/Category_theory). More specifically, they're derived from [Covariant and Contravariant Functors](https://en.wikipedia.org/wiki/Functor#Covariance_and_contravariance) and the *composition* between the two. If you want to learn more on these rules, that would be the path I'd take. Additionally, there is a class called [`Variance`](https://github.com/scala/scala/blob/2.13.x/src/reflect/scala/reflect/internal/Variance.scala) in the Scala compiler which looks like a helper class in regards to variance rules, if you want to take a deeper look. Upvotes: 2 <issue_comment>username_2: To verify the correctness of variance annotation, the compiler classifies all positions in a class or trait body as positive (`+`), negative (`-`) or neutral. A "position" is any location in the class or trait (but from now on I'll just write "class") body where a type parameter may be used. The compiler checks each use of each of the class's type parameters: * `+T` type parameters (covariant/flexible) may only be used in positive positions. * `-T` type parameters (contravariant) may only be used in negative positions. * `T` type parameters (invariant/rigid) may be used in any position, and is therefore, the only kind of type parameter that can be used in neutral positions. 
To classify the positions, the compiler starts from the given class declaration of a type parameter and moves inward through deeper nesting levels. Positions at the top level of the declaring class are classified as positive. By default, positions at deeper nesting levels are classified the same as that at enclosing levels, but there are three exceptions where the classification changes: 1. Method value parameter positions are classified to the flipped classification relative to positions outside the method, where: * the flip of a positive classification is negative * the flip of a negative classification is positive * the flip of a neutral classification remains neutral. 2. Besides method value parameter positions, the current classification is also flipped at the type parameters of methods. 3. A classification is sometimes flipped at the type argument position of a type, such as the `Arg` in `C[Arg]`, depending on the variance of the corresponding type parameter. If `C`'s type parameter is: * `+T` then the classification stays the same. * `-T` then the current classification is flipped * `T` then the current classification is changed to neutral To understand better, here's a contrived example, where all positions are annotated with their classifications, given by the compiler: ``` abstract class Cat[-T, +U] { def meow[W-](volume: T-, listener: Cat[U+, T-]-): Cat[Cat[U+, T-]-, U+]+ } ``` * The type parameter `W` is in a negative (contravariant) position because of rule number 2 stated above. (So it is flipped relative to positions outside the method - which are stated to be positive by default, so the compiler flips it and that is why it becomes negative.). * The value parameter `volume` is in a negative position (contravariant) because of rule number 1. (So it is flipped in the same manner as `W`) * The value parameter `listener`, is in a negative position for the same reason as `volume`. 
Looking at the positions of its type arguments `U` and `T` inside the type `Cat`, they are flipped because `Cat` is in a negative position, and according to rule number 3 it must be flipped. * The result type of the method is positive because it's considered outside the method. Looking inside the result type of `meow`: the position of the first `Cat[U, T]` is negative because `Cat`'s first type parameter, `T`, is annotated with a `-`; while the second type argument, `U`, is positive since `Cat`'s second type parameter, `U`, is annotated with a `+`. The two positions remain unchanged (are not flipped) because the flipping rule does not apply here (rule number 3), since `Cat` is in a positive position. But the types `U` and `T` inside the first argument of `Cat` are flipped because the flipping rule does apply here, since that `Cat` is in a negative position. As you can see, it's quite hard to keep track of variance positions. That's why it's a welcome relief that the Scala compiler does this job for you. Once the classification is computed, the compiler checks that each type parameter is only used in positions that are classified appropriately. In this case, `T` is only used in negative positions, while `U` is only used in positive positions. So class `Cat` is type correct. The rules and example are taken directly from the [Programming in Scala](https://booksites.artima.com/programming_in_scala_4ed) book by <NAME>, <NAME> and <NAME>. This also answers your second question: based on these rules we can infer that method value arguments will always be in contravariant positions, while method result types will always be in covariant positions. This is why you can't have a covariant type in a method value parameter position in your example. Upvotes: 0
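The flipping rules above are mechanical enough to check with a tiny position walker. The toy model below is written in Python only so it can be run anywhere; the tuple encoding of types is my own, not anything from the Scala compiler. It reproduces the classifications in the annotated `Cat` example:

```
FLIP = {"+": "-", "-": "+", "*": "*"}    # "*" = neutral
VARIANCES = {"Cat": ("-", "+")}          # Cat[-T, +U]

def classify(t, pos, acc):
    """Walk a type tree and record the position classification of every
    type-parameter occurrence.  A type is either a bare parameter name
    or a tuple ("Cat", arg1, arg2) for an applied type like Cat[A, B]."""
    if isinstance(t, str):
        acc.append((t, pos))
        return acc
    name, *args = t
    for variance, arg in zip(VARIANCES[name], args):
        if variance == "+":
            child = pos              # covariant parameter: keep
        elif variance == "-":
            child = FLIP[pos]        # contravariant parameter: flip
        else:
            child = "*"              # invariant parameter: neutral
        classify(arg, child, acc)
    return acc

# meow's result type Cat[Cat[U, T], U] sits in a positive position:
print(classify(("Cat", ("Cat", "U", "T"), "U"), "+", []))
# [('U', '+'), ('T', '-'), ('U', '+')]  -- matches Cat[Cat[U+, T-]-, U+]+

# meow's value parameter listener: Cat[U, T] sits in a flipped (negative) position:
print(classify(("Cat", "U", "T"), "-", []))
# [('U', '+'), ('T', '-')]  -- matches Cat[U+, T-]-
```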
2018/03/17
744
2,883
<issue_start>username_0: Getting started with a droplet on Digital Ocean, so far it's been a fun process. Now I want to begin migrating my site, so I set everything up and created an `index.php` in my root dir as a "construction" page for now. I'm setting up WordPress in a sub-dir and moved all of the contents via `all-in-one WP Migration`, which has always worked perfectly for me. I've given every dir and file inside of `root/public_html` (WordPress installation) `rwx`, and atm ownership has been changed to `www-data`. So I go to load up my site, which atm would be something like `/public_html`, and I log in and I am prompted with the `wp-admin` screen. Everything is working, media files are uploaded, theme is there etc. When I go to my home page, all the images load, but I immediately notice that in the admin bar at the top the wp-glyphs aren't loading and I'm getting those weird squares you see when your computer/phone doesn't know what it's trying to look at. Also any front-end page other than my home returns a 404. My homepage again is accessed by `/public_html`, so you would think my `about` page would be `/public_html/about`, which is where WordPress is sending me when clicked, but I receive a 404. I've spent probably 4-5 hours now trying to figure this out and eventually turned to a guide [that walks you through it](http://www.wpbeginner.com/wp-tutorials/how-to-install-wordpress-in-a-subdirectory-step-by-step/), but nada so far. Can anyone offer a suggestion? EDIT ---- I'd like to add that I followed WordPress's instructions on modifying the .htaccess file. I'm assuming it goes in the subdirectory, correct? `.htaccess` ----------- ``` RewriteEngine On RewriteBase /public_html/ RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . 
/public_html/index.php [L] ```<issue_comment>username_1: Are you able to access wp-admin? If yes, go to Dashboard -> Settings -> Permalinks and update the permalinks; that should solve the problem. Upvotes: 0 <issue_comment>username_2: Access your dashboard by logging in to wp-admin, then go to Settings, then Permalinks. Change them to any of the provided styles, save, and then change them back to your preferred one. This will work well. Upvotes: -1 <issue_comment>username_3: Just as a reference for myself and anyone who might stumble upon this. Solution -------- The answer was that Apache was not configured to read `.htaccess` files. If you are experiencing the same problems and have set up your own VPS, go to `apache.conf` at `/etc/apache2/apache.conf` and find the directive that corresponds with your website directory. Inside you should see an option that reads `AllowOverride None`. Change this to `AllowOverride All`. If this is not the solution for you, at least it's another step out of the way. Upvotes: 3 [selected_answer]
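For reference, the changed stanza in the Apache config ends up looking like the following (this assumes the stock Debian/Ubuntu layout with the document root under `/var/www`; adjust the path to your own site directory):

```
<Directory /var/www/>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
</Directory>
```

After the change, restart Apache (`sudo systemctl restart apache2`), and make sure the rewrite module is enabled (`sudo a2enmod rewrite`), or the `.htaccess` rewrite rules still won't run.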
2018/03/17
1,298
4,337
<issue_start>username_0: I was trying to update my Android Studio 2.2.3 to version 3 but when i did, below error blocked the build process : > > Unable to resolve dependency for ':app@debug/compileClasspath': Could not resolve com.android.support:multidex:1.0.3. > > > > > > > Could not resolve com.android.support:multidex:1.0.3. > > Required by: > > project :app > > > > > > > > > > > > Could not resolve com.android.support:multidex:1.0.3. > > > > > > > > > > > > > > > > > Could not parse POM <https://dl.google.com/dl/android/maven2/com/android/support/multidex/1.0.3/multidex-1.0.3.pom> > > > > > > > > > > > > > > > > > > > > > > Already seen doctype. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > **build.gradle(Module: app) :** ``` buildscript { repositories { jcenter() flatDir { dirs 'libs' } } dependencies { classpath 'com.android.tools.build:gradle:3.0.1' } } apply plugin: 'com.android.application' repositories { jcenter() flatDir { dirs 'libs' } google() } android { compileSdkVersion 26 buildToolsVersion '26.0.2' useLibrary 'org.apache.http.legacy' defaultConfig { applicationId "My.App.Id" minSdkVersion 17 targetSdkVersion 26 versionCode 111 versionName "1.1.1" testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner" multiDexEnabled true } dexOptions { javaMaxHeapSize "4g" } compileOptions { sourceCompatibility JavaVersion.VERSION_1_7 targetCompatibility JavaVersion.VERSION_1_7 } buildTypes { release { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro' } debug { } } } dependencies { compile fileTree(include: ['*.jar'], dir: 'libs') compile 'com.android.support:appcompat-v7:+' compile 'com.android.support:design:+' compile 'com.android.support:cardview-v7:+' compile 'com.android.support:recyclerview-v7:+' compile 'com.github.rahatarmanahmed:circularprogressview:+' compile files('libs/devicedriverslib.jar') compile files('libs/allutils.jar') compile 
'com.android.support:multidex:1.0.3' } ``` **build.gradle(Project: MyProjectName)** ``` // Top-level build file where you can add configuration options common to all sub-projects/modules. buildscript { repositories { jcenter() google() } dependencies { classpath 'com.android.tools.build:gradle:3.0.1' // NOTE: Do not place your application dependencies here; they belong // in the individual module build.gradle files } } allprojects { repositories { google() jcenter() } } ``` **Installed Build Tools :** [![enter image description here](https://i.stack.imgur.com/VeiOh.png)](https://i.stack.imgur.com/VeiOh.png) **gradle-wrapper.properties :** ``` #Tue Oct 31 12:15:03 IST 2017 distributionBase=GRADLE_USER_HOME distributionPath=wrapper/dists zipStoreBase=GRADLE_USER_HOME zipStorePath=wrapper/dists distributionUrl=https\://services.gradle.org/distributions/gradle-4.1-all.zip ```<issue_comment>username_1: If you are behind a firewall or even if you are in a forbidden country, you have to bypass the restrictions via a VPN tool or add the correct exceptions in the firewall. (free proxies like Psyphon and weak stuff won't work! you might wanna consider buying a good VPN!) the problem is about correct internet connection, so double check your network and internet connection and any limitations related. Upvotes: 1 [selected_answer]<issue_comment>username_2: I encountered this problem since i had set Global Gradle settings to Offline work 1. Click **File > Settings** (on macOS, **Android Studio > Preferences**) to open the **Settings** dialog. 2. In the left pane, expand **Build, Execution, Deployment** and then click **Gradle**. 3. Under Global Gradle settings, uncheck the **Offline work** checkbox. 4. Click **Apply** or **OK** for your changes to take effect. 5. Rebuild Gradle Source: [Configure Android Studio](https://developer.android.com/studio/intro/studio-config) Upvotes: 1
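Following up on the firewall point above: if the machine reaches the internet through a proxy rather than a VPN, Gradle can also be told about it explicitly in `gradle.properties` (the host and port values here are placeholders):

```
systemProp.http.proxyHost=proxy.example.com
systemProp.http.proxyPort=8080
systemProp.https.proxyHost=proxy.example.com
systemProp.https.proxyPort=8080
```

With these set, dependency downloads such as the `multidex-1.0.3.pom` above go through the proxy instead of failing mid-fetch.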
2018/03/17
479
1,859
<issue_start>username_0: So I have an XF application with multiple `Label`s. I want to be able to change all my labels' `FontStyle` programmatically depending on the platform, in one line if possible like below, instead of listing the labels' names one by one. ``` if(Device.RuntimePlatform == Device.Android) { label.Style = Device.Styles.ListItemDetailTextStyle; } else if(Device.RuntimePlatform == Device.iOS){ label.Style = Device.Styles.ListItemTextStyle; } ``` Is this possible?<issue_comment>username_1: You should set the style in the App.xaml file. I needed to set default fonts for all Labels in the application (including new classes derived from Label): ``` <Style.Triggers> <Trigger TargetType="Label" Property="FontAttributes" Value="None"> <Setter Property="FontFamily" Value="{StaticResource MainFontFamily}" /> </Trigger> <Trigger TargetType="Label" Property="FontAttributes" Value="Bold"> <Setter Property="FontFamily" Value="{StaticResource MainFontFamilyBold}" /> </Trigger> </Style.Triggers> <Setter Property="TextColor" Value="Black" /> ``` And set the fonts like ``` font_file_name.ttf#Font Name Font Name ``` You can read more about styles [here](https://learn.microsoft.com/en-us/xamarin/xamarin-forms/user-interface/styles/). Upvotes: 2 <issue_comment>username_2: An alternative option is to simply extend `Label`: ``` public class MyLabel : Label { public MyLabel() { if (Device.RuntimePlatform == Device.Android) { Style = Device.Styles.ListItemDetailTextStyle; } else if (Device.RuntimePlatform == Device.iOS) { Style = Device.Styles.ListItemTextStyle; } } } ``` P.S.: Styling is nicely covered in the [official documentation](https://learn.microsoft.com/en-us/xamarin/xamarin-forms/user-interface/styles/). Upvotes: 1 [selected_answer]
2018/03/17
638
2,305
<issue_start>username_0: I am writing an application which has to find a substring in a file and write this substring into a different file. For writing into a file, I am using fputs, but someone told me to check for a safer way of writing into the file. ``` while (fgets(line, MAX_LINE_LEN, fp1) != NULL) { if (pname_count < 1) { if (strstr(line, p_name)) { pname_count++; fputs(strstr(line, p_name), fp2);// danger. continue; } } //Remaining code } ``` I followed the two links below, but did not get my answer exactly. [gets() and fputs() are dangerous functions?](https://stackoverflow.com/questions/8921827/c-gets-and-fputs-are-dangerous-functions) [fputs dangerous in C](https://stackoverflow.com/questions/8921827/c-gets-and-fputs-are-dangerous-functions) Can someone explain what the vulnerability with "fputs" is in terms of safety? Since fwrite takes the number of characters to write into the file, does this make fwrite safer than fputs?<issue_comment>username_1: No risk. As long as your string is properly terminated, nothing bad can happen. However, here: ``` fputs(strstr(line, p_name), fp2); ``` the result of `strstr` can theoretically be `NULL`, so you could crash your program trying to write `NULL`. No risk of code injection here, though. Upvotes: 2 <issue_comment>username_2: I don't believe that `fputs` is dangerous in any inherent way. I believe [that other question](https://stackoverflow.com/questions/8921827/c-gets-and-fputs-are-dangerous-functions) was mistaken in suggesting that it was. You asked if `fwrite` was safer than `fputs` because of the explicit count. I don't think we can say so; I believe this cuts both ways: * `fputs` deals in null-terminated strings, so if you accidentally pass a string that's not properly null-terminated, `fputs` will sail off the end, and weird things may happen. And programmers certainly do make this mistake. * But it's just about as easy to accidentally pass the wrong count to `fwrite`.
In particular, if you're writing (what you think is) a null-terminated string, you're probably going to call `fwrite(str, 1, strlen(str), fp)`, which is obviously going to have exactly the same problem. Upvotes: 3
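Pulling both answers together, here is a minimal sketch (the helper name `write_match` is invented for illustration) that keeps `fputs` but guards against the `NULL` return from `strstr`:

```c
#include <stdio.h>
#include <string.h>

/* Writes the suffix of line starting at the first occurrence of name.
   Returns 1 if a match was written, 0 if name is absent (nothing written). */
int write_match(const char *line, const char *name, FILE *out)
{
    const char *match = strstr(line, name);
    if (match == NULL)          /* strstr returns NULL when name is absent */
        return 0;
    fputs(match, out);          /* safe: match is NUL-terminated */
    /* an equivalent call with an explicit count would be:
       fwrite(match, 1, strlen(match), out); */
    return 1;
}
```

In the question's loop, `fputs(strstr(line, p_name), fp2)` would become `write_match(line, p_name, fp2)`, which also avoids calling `strstr` twice.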
2018/03/17
1,708
6,462
<issue_start>username_0: I have a register user route which takes `name` , `email` and `password`. It works perfectly fine if the data is correct i.e. unique email and params are present, but if the user is already registered then Laravel sends auto error message in its own format. I want return format to be consistent in case of success or failure. Successful Register return data: ``` { "status": "success", "token": "<KEY>" } ``` But in case of error it sends data in other format. ``` {"message":"The given data was invalid.","errors":{"email":["The email has already been taken."]}} ``` I want both of them to be consistent. Success return data is fine. But i want to customize data if failure occurs. Something like this: ``` {"status":"error","message":"The given data was invalid.","errors":{"email":["The email has already been taken."]}} ``` Basically, I need `status` param to be coming with every response. Also, I had one query while using Postman the output was pure HTML when error occurred the HTML page was default Laravel Page on the other hand when angular sends the same request the error is json format which i just pasted above. Since angular is getting JSON respose in any case it is fine for me. But why didn't postman showed me that response. Register Controller: ``` public function register(RegisterRequest $request) { $newUser = $this->user->create([ 'name' => $request->get('name'), 'email' => $request->get('email'), 'password' => <PASSWORD>($request->get('password')) ]); if (!$newUser) { return response()->json(['status'=>'error','message'=>'failed_to_create_new_user'], 500); } return response()->json([ 'status' => 'success', 'token' => $this->jwtauth->fromUser($newUser) ]); } ``` Register Request Handler: ``` public function authorize() { return true; } /** * Get the validation rules that apply to the request. 
* * @return array */ public function rules() { return [ 'name' => 'required', 'email' => 'required | email | unique:users,email', 'password' => '<PASSWORD>' ]; } ```<issue_comment>username_1: If I understand you correctly, you always get the error-response without the 'status' key. What happens with your current code, are a couple of things: * RegisterController@register(RegisterRequest $request) is called by a route * Laravel sees you use the RegisterRequest class as an argument, and will instantiate this class for you. * Instantiating this class means it will directly validates the rules. * If the rules are not met, laravel directly responds with the errors found. * This response will always be in laravel's default 'layout' and the code stops there. Conclusion: Your code is not even triggered when your validation rules are not met. I've looked into a solution and came up with this: ``` public function register(Illuminate\Http\Request $request) { //Define your validation rules here. $rules = [ 'name' => 'required', 'email' => 'required | email | unique:users,email', 'password' => '<PASSWORD>' ]; //Create a validator, unlike $this->validate(), this does not automatically redirect on failure, leaving the final control to you :) $validated = Illuminate\Support\Facades\Validator::make($request->all(), $rules); //Check if the validation failed, return your custom formatted code here. 
if($validated->fails()) { return response()->json(['status' => 'error', 'messages' => 'The given data was invalid.', 'errors' => $validated->errors()]); } //If not failed, the code will reach here $newUser = $this->user->create([ 'name' => $request->get('name'), 'email' => $request->get('email'), 'password' => <PASSWORD>($request->get('password')) ]); //This would be your own error response, not linked to validation if (!$newUser) { return response()->json(['status'=>'error','message'=>'failed_to_create_new_user'], 500); } //All went well return response()->json([ 'status' => 'success', 'token' => $this->jwtauth->fromUser($newUser) ]); } ``` Now, not conforming your validation rules still triggers an error, but your error, and not laravel's built-in error :) I hope it helps! Upvotes: 3 [selected_answer]<issue_comment>username_2: This is what i came up with: ``` function validate(array $rules) { $validator = Validator::make(request()->all(), $rules); $errors = (new \Illuminate\Validation\ValidationException($validator))->errors(); if ($validator->fails()) { throw new \Illuminate\Http\Exceptions\HttpResponseException(response()->json( [ 'status' => false, 'message' => "Some fields are missing!", 'error_code' => 1, 'errors' => $errors, ], \Illuminate\Http\JsonResponse::HTTP_UNPROCESSABLE_ENTITY)); } } ``` Create a helper directory (`App\Helpers`) and add it into a file. don't forget to add that into your composer.json ``` "autoload": { "files": [ "app/Helpers/system.php", ], }, ``` Now you can call `validate()` in your controllers and get what you want: ``` validate([ 'email' => 'required|email', 'password' => '<PASSWORD>', 'remember' => 'nullable|boolean', 'captcha' => 'prod_required|hcaptcha', ]); ``` Upvotes: 0 <issue_comment>username_3: In Laravel 8 I added my custom invalidJson with the "success": false: in app/Exceptions/Handler.php: ``` /** * Convert a validation exception into a JSON response. 
* * @param \Illuminate\Http\Request $request * @param \Illuminate\Validation\ValidationException $exception * @return \Illuminate\Http\JsonResponse */ protected function invalidJson($request, ValidationException $exception) { return response()->json([ 'success' => false, 'message' => $exception->getMessage(), 'errors' => $exception->errors(), ], $exception->status); } ``` Upvotes: 1
2018/03/17
1,413
4,810
<issue_start>username_0: I am trying to make a Horizontal UIPickerView. But i have som issues with the width. As you can se in the top of the image above[![enter image description here](https://i.stack.imgur.com/Gbfl2.jpg)](https://i.stack.imgur.com/Gbfl2.jpg) I want the width of the PickerView to be = view.frame.size.width, but for some reason it keeps very small. In my code i have: ``` let cityPicker = UIPickerView() var rotationAngle: CGFloat! let width:CGFloat = 100 let height:CGFloat = 100 override func viewDidLoad() { super.viewDidLoad() cityPicker.delegate = self cityPicker.dataSource = self cityPicker.frame = CGRect(x: 0, y: 0, width: view.frame.width, height: 50) self.view.addSubview(cityPicker) rotationAngle = -90 * (.pi/180) cityPicker.transform = CGAffineTransform(rotationAngle: rotationAngle) } func numberOfComponents(in pickerView: UIPickerView) -> Int { return 1 } func pickerView(_ pickerView: UIPickerView, numberOfRowsInComponent component: Int) -> Int { return byer.count } func pickerView(_ pickerView: UIPickerView, rowHeightForComponent component: Int) -> CGFloat { return 100 } func pickerView(_ pickerView: UIPickerView, widthForComponent component: Int) -> CGFloat { return 100 } func pickerView(_ pickerView: UIPickerView, viewForRow row: Int, forComponent component: Int, reusing view: UIView?) -> UIView { let view = UIView() view.frame = CGRect(x: 0, y: 0, width: width, height: height) let label = UILabel() label.frame = CGRect(x: 0, y: 0, width: width, height: height) label.textAlignment = .center label.font = UIFont (name: "Baskerville-Bold", size: 30) label.text = byer[row] view.addSubview(label) view.transform = CGAffineTransform(rotationAngle: 90 * (.pi/180)) return view } ``` Can anyone see what i am doing wrong?<issue_comment>username_1: You should just use autolayout, but if you want manual layout you cannot use your view's frame in `viewDidLoad`. 
At that point the frame will equal what is in the storyboard or nib file, and will not reflect the actual device or view hierarchy. You should put your manual sizing logic in `viewDidLayoutSubviews`. When this function is called your view has the correct size and you can use it to set your subviews. Upvotes: 2 <issue_comment>username_2: ``` var rotationAngle : CGFloat! ``` > > In your ViewDidLoad > > > ``` rotationAngle = -90 * (.pi / 180 ) ``` > > write this function in your class > > > ``` func rotatePickerView(pickerView : UIPickerView) { let y = pickerView.frame.origin.y let x = pickerView.frame.origin.x pickerView.transform = CGAffineTransform(rotationAngle: rotationAngle) pickerView.frame = CGRect(x: x, y: y, width: pickerView.frame.height , height: pickerView.frame.width) } ``` > > Also we need to rotate the label as we are rotating our pickerview, Now in your viewForRow method. (These are Pickerview delegate methods.) > > > ``` let label = UILabel() label.font = UIFont(name: "Helvetica", size: 40) label.font = UIFont.systemFont(ofSize: 40, weight: .bold) label.minimumScaleFactor = 0.5 label.textAlignment = .center label.textColor = UIColor.white label.transform = CGAffineTransform(rotationAngle: 90 * (.pi / 180 )) //Put your values in an array like Minutes,Temperature etc. 
label.text = minutes[row] return label ``` > > Call the rotatePickerView function in your viewDidLoad and put your pickerview as a parameter input, Done, Thank you ;) > > > Upvotes: 0 <issue_comment>username_3: I don't know what @username_1 meant by "just use autolayout", but I had to create a rotated picker ``` private let horizontalPickerView: UIPickerView = { let view = UIPickerView() view.translatesAutoresizingMaskIntoConstraints = false view.showsSelectionIndicator = false view.transform = CGAffineTransform(rotationAngle: -90 * (.pi / 180)) return view }() ``` then set its constraints to empirically calculated ``` horizontalPickerView.widthAnchor.constraint(equalToConstant: screenWidth), horizontalPickerView.heightAnchor.constraint(equalToConstant: 300), horizontalPickerView.centerXAnchor.constraint(equalTo: view.centerXAnchor), horizontalPickerView.centerYAnchor.constraint(equalTo: mainButton.topAnchor, constant: -32) ``` and also set items width (actually height) in UIPickerViewDelegate method ``` public func pickerView(_ pickerView: UIPickerView, rowHeightForComponent component: Int) -> CGFloat { 100 } ``` to get this [![enter image description here](https://i.stack.imgur.com/GLCsI.gif)](https://i.stack.imgur.com/GLCsI.gif) Upvotes: 2
2018/03/17
1,414
4,217
<issue_start>username_0: Here is my code: ``` func dump(w io.Writer, val interface{}) error { je := json.NewEncoder(w) return je.Encode(val) } type example struct { Name string Func func() } func main() { a := example{ Name: "Gopher", Func: func() {}, } err := dump(os.Stdout, a) if err != nil { panic(err) } } ``` The program will panic with `json: unsupported type: func()` My question is, how can I encode ANYTHING into json, ignoring those the encode cannot handle. For example, the above data structure, I want the output to be: `{"Name": "Gopher"}` IMPORTANT: for the dump funtion, its value is an `interface{}`, i.e. don't know what kind of data it will receive, so tricks like `json:"-"` is **not** what I want. If in case that the data passed to dump() is not marshal-able, e.g. `dump(func(){})`, it is perfectly acceptable to just return empty string. **EDIT after two years:** The answer provided by @TehSphinX has this piece of code which I want to avoid: ``` func (s example) MarshalJSON() ([]byte, error) { return json.Marshal(struct { Name string }{ Name: s.Name, }) } ``` the reason is: in `func dump(w io.Writer, val interface{}) error`, the **`val`** is ANY code, i.e. code NOT written by me, could be any open source code, or even data types I don't have access to its source code.<issue_comment>username_1: JSON tags (the correct way) --------------------------- Is this what you are looking for: [playground](https://play.golang.org/p/ELjv0U7hHFA) You can use field tags to give json instructions on how to handle your struct. See also [encoding/json](https://golang.org/pkg/encoding/json/) for more info on the options you have. -- edit -- It does not matter that val is of type `interface{}`. `json.Marshal()` will reflect on it anyway and find out what type it is. It is the programmers job to set the correct `json` tags on all structs he/she wants to `dump`. 
Custom Marshal function (another way, not necessary here) -------------------------------------------------------- You can also write a custom MarshalJSON function to do the marshalling of every type however you like: [custom marshalling playground](https://play.golang.org/p/PCFm_nFIrUM) Your own json Marshaller (have it your way) ------------------------------------------- If you don't like the way Go handles json marshalling, you could always write your own marshalling function doing all the reflection yourself. Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use [json.RawMessage](https://golang.org/pkg/encoding/json/#RawMessage). It can be used to delay JSON decoding or precompute a JSON encoding. Upvotes: 0 <issue_comment>username_3: If you just want to log the entire struct, you can do this. ``` func dump(w io.Writer, val interface{}) error { je := json.NewEncoder(w) return je.Encode(val) } type example struct { Name string Func func() } func main() { a := example{ Name: "Gopher", Func: func() {}, } type _example struct { example Func interface{} } err := dump(os.Stdout, _example{example: a}) if err != nil { panic(err) } } ``` This is an example of logging the entire http.Request. ``` type _Request struct { *http.Request GetBody interface{} Cancel interface{} } j, err := json.MarshalIndent(_Request{Request: r}, "", " ") ``` Upvotes: 0 <issue_comment>username_4: To quote the comment from <NAME>: > > If your goal is data inspection and debugging, then the spew package may be of help. > > > Spew is wow-cool!!
<https://github.com/davecgh/go-spew> part of dump of an HTTP request: ``` (*x509.Certificate)(0xc00016c100)({ Raw: ([]uint8) (len=947 cap=960) { 00000000 30 82 03 af 30 82 02 97 a0 03 02 01 02 02 10 08 |0...0...........| 00000010 3b e0 56 90 42 46 b1 a1 75 6a c9 59 91 c7 4a 30 |;.V.BF..uj.Y..J0| 00000020 0d 06 09 2a 86 48 86 f7 0d 01 01 05 05 00 30 61 |...*.H........0a| 00000030 31 0b 30 09 06 03 55 04 06 13 02 55 53 31 15 30 |1.0...U....US1.0| 00000040 13 06 03 55 04 0a 13 0c 44 69 67 69 43 65 72 74 |...U....DigiCert| ``` Upvotes: 0
2018/03/17
1,662
4,283
<issue_start>username_0: I have a docker-compose-cli.yaml file defining 5 services: ``` Starting peer0.org1.example.com ... Starting peer1.org2.example.com ... Starting orderer.example.com ... Starting peer0.org2.example.com ... Starting peer0.org1.example.com ... ``` Running the command ``` docker-compose -f docker-compose-cli.yaml up -d ``` Result in: ``` Creating peer1.org2.example.com ... error Creating peer0.org1.example.com ... Creating peer1.org2.example.com ... Creating peer0.org2.example.com ... Creating peer0.org1.example.com ... error Creating peer1.org1.example.com ... ERROR: for peer1.org2.example.com Cannot start service peer1.org2.example.com: driver failed programming external connectivity on endpoint peer1.org2.example.com (9Creating peer0.org2.example.com ... error r ERROR: for peer0.org1.example.com Cannot start service peer0.org1.example.com: driver failed programming external connectivity on endpoint peer0.org1.example.com (aCreating orderer.example.com ... error Creating peer1.org1.example.com ... 
error 9b374fc4dc75a62ccdf3b1e5e99c87996941547e67da0adc45958f20d464501): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:9053:tcp:172.18.0.4:7053: input/output error ERROR: for orderer.example.com Cannot start service orderer.example.com: driver failed programming external connectivity on endpoint orderer.example.com (fca3fa13c583f0e7b4411476d1519d09826522da796585312e5a34417b0bc5ed): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:7050:tcp:172.18.0.5:7050: input/output error ERROR: for peer1.org1.example.com Cannot start service peer1.org1.example.com: driver failed programming external connectivity on endpoint peer1.org1.example.com (160f6ca079292769e9b24ddad849c9282c56eed975eb5a287d1431b335cced8a): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:8053:tcp:172.18.0.2:7053: input/output error ERROR: for peer1.org1.example.com Cannot start service peer1.org1.example.com: driver failed programming external connectivity on endpoint peer1.org1.example.com (160f6ca079292769e9b24ddad849c9282c56eed975eb5a287d1431b335cced8a): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:8053:tcp:172.18.0.2:7053: input/output error ERROR: for peer1.org2.example.com Cannot start service peer1.org2.example.com: driver failed programming external connectivity on endpoint peer1.org2.example.com (9c22e5e2029edb8faab4297d6f94c03fc302ed9d55436f7b1282824e9d26dbd6): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:10053:tcp:172.18.0.2:7053: input/output error ERROR: for orderer.example.com Cannot start service orderer.example.com: driver failed programming external connectivity on endpoint orderer.example.com (fca3fa13c583f0e7b4411476d1519d09826522da796585312e5a34417b0bc5ed): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:7050:tcp:172.18.0.5:7050: input/output error ERROR: for peer0.org1.example.com Cannot start service peer0.org1.example.com: driver failed programming external connectivity on endpoint peer0.org1.example.com 
(a89f0f67fd4fbfc62cfc9ec1b44b587bbb332c27bc4a1dd7f05e67cf17317d37): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:7053:tcp:172.18.0.3:7053: input/output error ERROR: for peer0.org2.example.com Cannot start service peer0.org2.example.com: driver failed programming external connectivity on endpoint peer0.org2.example.com (99b374fc4dc75a62ccdf3b1e5e99c87996941547e67da0adc45958f20d464501): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:9053:tcp:172.18.0.4:7053: input/output error ERROR: Encountered errors while bringing up the project. ``` I tried to remove the containers using docker-compose -f docker-compose-cli.yaml down docker-compose -f docker-compose-cli.yaml stop but nothing worked.Do you know why? Also I think when i run the command up it creates the containers but the services are probably not working.<issue_comment>username_1: This looks similar to <https://github.com/docker/for-win/issues/1038> The solution seems to be to restart Docker. Upvotes: 3 [selected_answer]<issue_comment>username_2: Just turn off the Experimental feature in docker settings and everything should work as expected. [![enter image description here](https://i.stack.imgur.com/Byd9Z.png)](https://i.stack.imgur.com/Byd9Z.png) Upvotes: 0
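As a hedged recipe (the `systemctl` line assumes a systemd-based Linux host; on Docker for Windows/Mac the daemon restart is done from the tray menu instead), the restart suggested in the accepted answer usually looks like this:

```
# stop and remove the compose containers and their network
docker-compose -f docker-compose-cli.yaml down
# restart the Docker daemon so it forgets the stale port mappings
sudo systemctl restart docker
# bring everything back up
docker-compose -f docker-compose-cli.yaml up -d
```

The `down` step matters because the "driver failed programming external connectivity" errors refer to port bindings left over from the previous run.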
2018/03/17
1,526
5,014
<issue_start>username_0: [UPDATED]: Split the code into 2 functions as suggested by @Ashton to make things a bit simpler. I am a noob with flutter and I am trying to do some really simple things with flutter i.e. building a login page with either firebase login or google login. I face errors with both and I am not able to find out what the reason is for it. I have firebase setup and my GoogleServices-Info.plist file setup for ios development. The function doing the authentication for me as below. ``` Future \_googleSignIn() async { GoogleSignInAccount user = googleSignIn.currentUser; if (user == null) user = await googleSignIn.signInSilently(); if (user == null) { user = await googleSignIn.signIn(); } return user; } ``` If I use the Google Sign in then I get the following error. ``` [VERBOSE-2:dart_error.cc(16)] Unhandled exception: type '_Future' is not a subtype of type 'Future' where \_Future is from dart:async Future is from dart:async Null is from dart:core #0 GoogleSignIn.\_callMethod (package:google\_sign\_in/google\_sign\_in.dart) #1 GoogleSignIn.\_addMethodCall. (package:google\_sign\_in/google\_sign\_in.dart:196:28) #2 \_RootZone.run (dart:async/zone.dart:1376:54) #3 \_FutureListener.handleWhenComplete (dart:async/future\_impl.dart:151:18) #4 \_Future.\_propagateToListeners.handleWhenCompleteCallback (dart:async/future\_impl.dart:603:39) #5 \_Future.\_propagateToListeners (dart:async/future\_impl.dart:659:37) #6 \_Future.\_addListener. (dart:async/future\_impl.dart:342:9) #7 \_microtaskLoop (dart:async/schedule\_microtask.dart:41:21) #8 \_startMicrotaskLoop (dart:async/schedule\_microtask.dart:50:5) ``` However, if I call the firebase simple auth system, then if the user doesn't exist and I try to use the createUserWithEmailAndPassword then the following error comes back. 
``` Future \_firebaseAuthSignIn(String email, String password) async { FirebaseUser firebaseUser; try { firebaseUser = await auth.signInWithEmailAndPassword( email: email, password: <PASSWORD>); final userid = FirebaseAuth.instance.currentUser; } catch (exception) { firebaseUser = await auth.createUserWithEmailAndPassword( email: email, password: password); } return firebaseUser; } ``` Although, this creates the user in firebase, the function createUserWithEmailAndPassword causes the exception below.. ``` [VERBOSE-2:dart_error.cc(16)] Unhandled exception: type '_InternalLinkedHashMap' is not a subtype of type 'Map' where \_InternalLinkedHashMap is from dart:collection Map is from dart:core String is from dart:core #0 FirebaseAuth.createUserWithEmailAndPassword (package:firebase\_auth/firebase\_auth.dart) #1 \_ensureLoggedIn (package:remy/auth/login.dart:28:41) #2 LoginPageState.\_handleSubmit (package:remy/auth/login.dart:64:11) #3 LoginPageState.build. (package:remy/auth/login.dart:132:41) #4 \_InkResponseState.\_handleTap (package:flutter/src/material/ink\_well.dart:478:14) #5 \_InkResponseState.build. (package:flutter/src/material/ink\_well.dart:530:30) #6 GestureRecognizer.invokeCallback (package:flutter/src/gestures/recognizer.dart:102:24) #7 TapGestureRecognizer.\_checkUp (package:flutter/src/gestures/tap.dart:161:9) #8 TapG<…> ``` My initial reaction was that since these errors look like type inconsistencies, the package versions were not latest. However, I double checked and my versions are upto date.. google\_sign\_in: 3.0.0 firebase\_auth: 0.5.1 Any help in the right direction would be really helpful. Thanks<issue_comment>username_1: > > type '\_Future' is not a subtype of type 'Future' > > > In this case, you don't have a return statement in the `use_google == true` block. 
Try `return user;`, which is of type [Future](https://github.com/flutter/plugins/blob/master/packages/google_sign_in/lib/google_sign_in.dart#L237-L240) > > type '\_InternalLinkedHashMap' is not a subtype of type 'Map' > > > I'm not exactly sure here. Perhaps it's the format of the input, how it's handled in the exception, or maybe the setup of firebase\_auth in general. I recommend you split up the two different sign-in methods so you can return the appropriate type from each method, and so you can narrow your focus and see if that helps you uncover the issue. Upvotes: 0 <issue_comment>username_2: I believe you should change the return type from: ``` Future \_googleSignIn() async ``` to ``` Future \_googleSignIn() async ``` Upvotes: 0 <issue_comment>username_3: I also ran into this. I think this might be a bug in version `3.0`. I downgraded to `2.1.2`, e.g. in `pubspec.yaml`: ```yaml dependencies: flutter: sdk: flutter # ... google_sign_in: 2.1.2 ``` And the issue disappears. Upvotes: 2 <issue_comment>username_4: **For the error:** type '\_Future' is not a subtype of type 'Future' **Quick fix:** In Android Studio, Settings -> Flutter -> Run application in Dart 2.0 mode: "Disable dart 2". I am not sure why Dart 2 causes this error; hoping someone can help understand the issue. Upvotes: 1
2018/03/17
1,024
3,344
<issue_start>username_0: I am trying to solve a problem of making a sum and group by query in Prometheus on a metric where the labels assigned to the metric values are too unique for my sum and group by requirements. I have a metric sampling sizes of ElasticSearch indices, where the index names are labelled on the metric. The indices are named like this and are placed in the label "index": `project...` with concrete values that would look like this: `project.sample-x.ad19f880-2f16-11e7-8a64-jkzdfaskdfjk.2018.03.12` `project.sample-y.jkcjdjdk-1234-11e7-kdjd-005056bf2fbf.2018.03.12` `project.sample-x.ueruwuhd-dsfg-11e7-8a64-kdfjkjdjdjkk.2018.03.11` `project.sample-y.jksdjkfs-2f16-11e7-3454-005056bf2fbf.2018.03.11` so if I had the short version of the values in the "index" label I would just do: `sum(metric) by (index)` but what I am trying to do is something like this: `sum(metric) by ("project.")` where I can group by a substring of the "index" label. How can this be done with a Prometheus query? I assume this could maybe be solved using a label\_replace as part of the group, but I can't quite see how to "truncate" the label value to achieve this. Best regards <NAME><issue_comment>username_1: It looks as if the `index` label naming convention is wrong. These constituents should clearly be *separate metric labels* instead. So, instead of `project...` you should be storing timeseries with labels as in: `project{projectname="xxx", uniqueid="yyy"}` As for the `date` part, I assume this is already covered by the sample timestamps themselves? There are plenty of functions like `timestamp()`, `month()`, `day()` etc to manipulate the timestamp. So you have two options: * fix the metrics exporter to separate the *labels* instead of concatenating this information into the metric name itself, * OR, if you can't alter the exporter, use *metric relabeling* to convert the `index` label into new time series with the label set I described above.
You can then use standard Prometheus functions as you described. See [this article](https://www.robustperception.io/extracting-full-labels-from-consul-tags/) for an example of how you could do this. Upvotes: 1 <issue_comment>username_2: While it'd be best to fix the metrics, the next best thing is to use `metric_relabel_configs` using the same technique as [this blog post](https://www.robustperception.io/extracting-labels-from-legacy-metric-names/): ``` metric_relabel_configs: - source_labels: [index] regex: 'project\.([^.]*)\..*' replacement: '${1}' target_label: project ``` You will then have a `project` label that you can use as usual. Upvotes: 4 [selected_answer]<issue_comment>username_3: It is possible to use [label\_replace()](https://docs.victoriametrics.com/MetricsQL.html#label_replace) function in order to extract the needed parts of the label into a separate label and then group by this label when summing the results. For example, the following query extracts the `project.sample-y` from `project.sample-y.jksdjkfs-2f16-11e7-3454-005056bf2fbf.2018.03.11` value stored in the `index` label and puts the extracted value into `project_name` label. Then the `sum()` is grouped by `project_name` label values: ``` sum( label_replace(metric, "project_name", "$1", "index", "(project[.][^.]+).+") ) by (project_name) ``` Upvotes: 2
2018/03/17
1,663
6,176
<issue_start>username_0: I have a WooCommerce store, which is connected via Zapier to a Google spreadsheet. In this file, I keep track of the sales etc. Some of these columns contain, obviously, prices, such as price ex VAT, etc. However, for some reason the pricing values are stored in my spreadsheet as strings, such as `18.21`. To be able to automatically calculate with these values, I need to convert the values in these specific columns to numbers with a comma as the decimal separator. I'm new to Google Script, but by reading some other posts, I managed to "write" the following script, which almost does the job: ``` function stringIntoNumber() { var sheetActive = SpreadsheetApp.openById("SOME_ID"); var sheet = sheetActive.getSheetByName("SOME_SHEETNAME"); var range = sheet.getRange("R2:R"); range.setValues(range.getValues().map(function(row) { return [row[0].replace(".", ",")]; })); } ``` The script works fine as long as only values with a dot can be found in column R. When values that belong to the range are changed to values with a comma, the script gives the error: > > TypeError, can't find the function Replace. > > ><issue_comment>username_1: The error occurs because `.replace` is a string method and can't be applied to numbers. A simple workaround would be to ensure the argument is always a string; there is a `.toString()` method for that. In your code, try ``` return [row[0].toString().replace(".", ",")]; ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: The locale of your spreadsheet is set to a country that uses commas to separate decimal places. Zapier however seems to use dots, and therefore Google Sheets interprets the data it gets from Zapier as strings since it can't interpret it as valid numbers. If you change the locale to `United States` (under `File/Spreadsheet settings`) it should work correctly. But you may not want to do that because it can cause other issues. You got a TypeError because the type was `number` and not `string`.
You can use an if statement to check the type before calling `replace`. Also, you should convert the type to 'number' to make sure it will work correctly independent of your locale setting. ``` range.setValues(range.getValues().map(function(row) { if(typeof row[0] === "string") return [Number(row[0].replace(",", "."))]; else return row; })); ``` In this case I convert `,` to `.` instead of the other way around, since the conversion to `number` requires a `.`. Upvotes: 1 <issue_comment>username_3: * Select the column you want to change. * Go to **Edit > Find and Replace** * In the Find area, put "." * In the Replace with area, put "," Upvotes: 4
Alternatively one could swap `.` and `,`, like from `US$ 9.999,99` to `9,999.99` by using the following code snippet instead: ``` return str.match(/[0-9.,]+/)[0] .replace('.','_') .replace(',','.') .replace('_',','); ``` Upvotes: 0 <issue_comment>username_5: An alternative way to replace `.` with `,` is to use regex functions and conversion functions in the Sheets cells. Suppose your number is in `A1` cell, you can write this function in any new cell: ``` = IF(REGEXMATCH(TO_TEXT(A1), "."), VALUE(REGEXREPLACE(TO_TEXT(A1), ".", ",")), VALUE(A1)) ``` These functions do the following step: 1. Convert the number in the target cell to text. This should be done because `REGEXMATCH` expects a text as its argument. 2. Check if there is a `.` in the target cell. 3. If there is a `.`, replace it with `,`, and then convert the result to a number. 4. If there is no `.`, keep the text in the target cell as is, but convert it to a number. (*Note : the Google Sheets locale setting I used in applying these functions is* `United States`) Upvotes: 0 <issue_comment>username_6: I have different solution. In my case, I`m getting values from Google Forms and there it is allowed use only numbers with dot as I know. In this case when I capture data from Form and trigger script which is triggered when the form is submited. Than data is placed in specific sheet in a specific cell, but formula in sheet is not calculating, because with my locale settings calculating is possible only with a comma not dot, that is coming from Google Form. Then I use Number() to convert it to a number even if it is already set as a number in Google Forms. In this case, Google Sheets script is converting number one more time to number, but changes dot to comma because it is checking my locale. ``` var size = Number(sizeValueFromForm); ``` I have not tested this with different locale, so I can`t guarantee that will work for locale where situation is opposite to mine. I hope this helps someone. 
I was looking for solution here, but remembered that some time ago I had similar problem, and tried this time too and it works. Upvotes: 0 <issue_comment>username_7: ``` =IF(REGEXMATCH(TO_TEXT(F24);"[.]");REGEXREPLACE(F24;"[.]";",");VALUE(F24)) ``` Works for me If find dot replace with comma if not, put value Upvotes: 0
2018/03/17
1,515
5,549
<issue_start>username_0: I have a requirement to generate a unique number (ARN) in this format

```
DD/MM/YYYY/1, DD/MM/YYYY/2
```

and insert these in elastic search index. The approach i am thinking of is to create an auto increment field in the doc and use it to generate a new entry and use the new auto generated number to create the ARN and update the doc. doc structure that i am planning to use:

```
{
 id: 1,
 arn: 17/03/2018/01
}
```

something like this. How can i get auto increment field in elastic search?
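Elasticsearch has no built-in auto-increment field, so treat the following as a sketch rather than an answer from this thread: a common workaround is to keep a separate counter document that is bumped atomically (for example with a scripted `_update` call), and then build the ARN string client-side from the date and the returned sequence number. The formatting part is the easy half:

```python
from datetime import date

def make_arn(day: date, seq: int) -> str:
    """Build an ARN such as 17/03/2018/1 from a date and a sequence number.

    The sequence number is assumed to come from elsewhere, e.g. an
    atomically incremented counter document in Elasticsearch. Pad `seq`
    if the /01 style from the sample doc is preferred.
    """
    return "{:02d}/{:02d}/{}/{}".format(day.day, day.month, day.year, seq)
```

For example, `make_arn(date(2018, 3, 17), 1)` yields `"17/03/2018/1"`.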
2018/03/17
1,527
5,583
<issue_start>username_0: I am trying to update a column based on a combination of all columns. Table A

```
NUM_1 NUM_2 NUM_3 Name
----- ----- ----- ----
1     4     6     Test1
4     4     5     Test2
4     4     3     Test3
```

Table B

```
NUM_1 NUM_2 NUM_3 Name
----- ----- ----- ----
1     4     6     Final_1
4     4     5     Final_2
4     4     3     Final_3
```

If the three columns NUM_1, NUM_2, NUM_3 match, then I need to update Name in Table A with the value from Table B. Is there any simple script using a correlated query or something similar?
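The standard pattern for this is an `UPDATE` driven by a correlated subquery. As a runnable sketch — demonstrated here with an in-memory SQLite database rather than SQL Server, with the two tables from the question recreated as-is:

```python
import sqlite3

# Recreate tables A and B from the question in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (NUM_1 INTEGER, NUM_2 INTEGER, NUM_3 INTEGER, Name TEXT);
    CREATE TABLE B (NUM_1 INTEGER, NUM_2 INTEGER, NUM_3 INTEGER, Name TEXT);
    INSERT INTO A VALUES (1, 4, 6, 'Test1'), (4, 4, 5, 'Test2'), (4, 4, 3, 'Test3');
    INSERT INTO B VALUES (1, 4, 6, 'Final_1'), (4, 4, 5, 'Final_2'), (4, 4, 3, 'Final_3');
""")

# Correlated update: copy B.Name into A.Name where all three NUM columns match.
# The WHERE EXISTS guard leaves rows of A that have no match in B untouched.
conn.execute("""
    UPDATE A
    SET Name = (SELECT B.Name FROM B
                WHERE B.NUM_1 = A.NUM_1 AND B.NUM_2 = A.NUM_2 AND B.NUM_3 = A.NUM_3)
    WHERE EXISTS (SELECT 1 FROM B
                  WHERE B.NUM_1 = A.NUM_1 AND B.NUM_2 = A.NUM_2 AND B.NUM_3 = A.NUM_3)
""")

names = [row[0] for row in conn.execute("SELECT Name FROM A ORDER BY NUM_1, NUM_3")]
```

On SQL Server itself the join form would read `UPDATE A SET Name = B.Name FROM A JOIN B ON B.NUM_1 = A.NUM_1 AND B.NUM_2 = A.NUM_2 AND B.NUM_3 = A.NUM_3`.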
2018/03/17
841
3,456
<issue_start>username_0: I'm new in CakePHP 3.5. Please answer me nicely because I really need help in solving this problem or at least give an idea on how to solve this. I've read almost the same question but most of the answers they provide are already deprecated. My problem is I want to display in index the transactions that are reserves. I try this query in client transaction controller ``` public function index() { $this->paginate = [ 'contain' => ['Clients'] ]; $clientTransaction = $this->paginate($this->ClientTransaction); $clientTransaction = $this->ClientTransaction->find() ->where(['status' => 0]); $this->set(compact('clientTransaction')); } ``` [This the screenshot of the error in index.ctp](https://i.stack.imgur.com/09M6g.png) It is right that I code it in a controller or it is better to place that query code in model?<issue_comment>username_1: The result of your `paginate` call is being thrown away and overwritten with the result of `find` call, because you're using the same variable name. There's an example in [the manual](https://book.cakephp.org/3.0/en/controllers/components/pagination.html) of passing a query object to the paginator. In your case, it would look something like this: ``` $clientTransaction = $this->ClientTransaction->find() ->where(['status' => 0]); $clientTransaction = $this->paginate($clientTransaction); ``` Upvotes: 2 <issue_comment>username_2: The answer is pretty clear: Model. The manual also tells you "Fat models, skinny controllers". The reason is simply the [separation of concerns](https://en.wikipedia.org/wiki/Separation_of_concerns) in different parts of the application. A controller doesn't need to know anything about the model and it's logic. A controller just needs to deal with the request and use it to delegate its data to some other object that implements the actual business logic. This makes it easier to change and test and exchange or share parts of an application. 
If you want to go a step further and do proper [SoC](https://en.wikipedia.org/wiki/Separation_of_concerns) it would be actually Controller, Service, Model/Table. For example [custom finders](https://book.cakephp.org/3.0/en/orm/retrieving-data-and-resultsets.html#custom-finder-methods) should go into a table, any DB (or whatever kind of repository the model implements) should happen in the model. So in your case implement whatever find conditions you need in your model as custom finder then call it in the controller: ``` $query = $this->MyTable->find('customFinder'); $this->set('results', $this->paginate($query)); ``` In the case you need to pass args to the query I usually implement a separate method (findIndex & getIndexQuery($queryParams)) that takes the arguments, for example the URLs query params and calls the custom find and returns the query object. If you use a service layer, this method would go into your service layer because it implements the business logic. The service class, a custom made class, not part of the CakePHP framework, would implement the real business logic and do any kind of calculations or whatever else logic operations need to be done and pass the result back to the controller which would then pass that result to the view. This ensures that each part of the code is easy to test and exchange. For example the service is as well useable in a shell app because it doesn't depend on the controller. Upvotes: 1
2018/03/17
802
2,607
<issue_start>username_0: I'm having a trouble with css trying to remove a text after < span> tag to append other text. how can i remove this ``` [yesi](#) / (this slash needs to be removed with css) ```<issue_comment>username_1: you mean you want to hide that text? Then you can a class `.hideme {display: none;}` If you wanna remove the text you need to do it with java script there are so many ways to do that. Can you post the whole code with the ending `p` tag? Upvotes: -1 <issue_comment>username_2: ``` Original Text ``` Using the CSS display property, you can hide the content inside the tag, and then attach a pseudo-element with your new content to the tag. You do not need to use absolute positioning here because display: none; completely removes the element from the page, and it does affect the layout. It’s as if the text inside the element never existed. **Here’s the CSS:** ``` .replaced span { display: none; } .replaced:after { content: "This text replaces the original."; } ``` [For More INfo click here](https://blog.escapecreative.com/how-to-replace-text-with-css/) **HTML :** ``` [yesi](#) / (this slash needs to be removed with css) ``` **CSS** ``` .ll { display: none; } .replaced:after { content: "This text replaces the original."; } ``` [jsfiddle](https://jsfiddle.net/3r8vv8so/7/) Upvotes: 1 <issue_comment>username_1: just use this css lines: .entry-meta { font-size: 0; } .entry-meta .entry-author { font-size: 12px !important; } Upvotes: 0 <issue_comment>username_3: CSS cannot delete text, text must be deleted manually or programmatically by JavaScript. CSS is presentation so it can hide text. Off the top of my head there 4 ways to go about it and they all involve hiding the contents of `p.entry-meta` and revealing the contents of `span.entry-author`. The following demo has 4 Examples. 
Demo ---- ```css /* Example A - alpha/opacity */ .entry-metaA {color:rgba(0,0,0,0)} .entry-authorA {color:rgba(0,0,0,1)} /**/ /* Example B - transparency */ .entry-metaB {color:transparent} .entry-authorB {color:black} /**/ /* Example C - font-size */ .entry-metaC {font-size:0} .entry-authorC {font-size:16px} /* Props to username_1 */ /* Example D - visible children elements */ .entry-metaD {visibility: hidden} .entry-authorD {visibility: visible} /**/ ``` ```html [yesi](#) / (this slash needs to be removed with css) [yesi](#) / (this slash needs to be removed with css) [yesi](#) / (this slash needs to be removed with css) [yesi](#) / (this slash needs to be removed with css) ``` Upvotes: 0
2018/03/17
832
3,134
<issue_start>username_0: I have a big problem. I wrote script in jQuery which gets data from webservice by posing JSON and webservice by response sends me back data also in JSON. The code looks like that: ``` function Product(date_from,date_to,API_KEY) { var self = this; self.productURI = 'https://api.xxxxxxxxxxxxxxxxxxx/DailySales?FromDate='+date_from+' 00:01 :00&ToDate='+date_to+' 00:01:00'; self.products = new Array(); self.productsDiv = "#products"; self.getAllProducts = function () { var req = self.pobierz_dane_ajax(self.productURI, "GET"); req.done(function (data) { self.products = data; var dataString = JSON.stringify(data); $.ajax({ type: 'POST', url: 'write_transactions.php', data: { save_transaction: dataString, date_from: date_from } }); }); } self.pobierz_dane_ajax = function (uri, method, data) { var request = { url: uri, type: method, contentType: "application/json", accepts: "application/json", dataType: 'json', data: JSON.stringify(data), crossDomain: true, beforeSend: function (xhr) { xhr.setRequestHeader("Authorization", "Basic " + API_KEY); } }; return $.ajax(request); } ``` } I call my function like that:<issue_comment>username_1: If I understand your problem correctly you currently * call a webpage via browser * the webpage fetch data from an external api with JS (ajax) * you parse the response and send it per POST to an additional PHP script (saves the data to database) now you call the webpage from cron with wget but it does not work The reason is, wget just fetch the page and display the HTML response, but does not run the JS-script! Instead of wget you could use [phantomjs](http://phantomjs.org/). 
It basically acts as a headless browser and can for example be called from cron Another possible solutions could be to rewrite some of your code and do everything in the php script directly * PHP Script fetch data from API * Same Script save data to database * cronjob regulary call the php script with wget Edit - suggesting solution for php api call ``` php $options = array( 'http' = array( 'method' => 'GET', 'header' => array( 'Authorization: Basic ' . $API_KEY, 'Content-type: application/json' ) ) ); $context = stream_context_create($options); $response = file_get_contents($API_URL, false, $context); //work with response / save to db ``` Upvotes: 1 <issue_comment>username_2: Helll Yeah! It was so easy :D I am posting my solution for others :D ``` $remote_url = 'https://api.xxxxxxxxxxxxxx'; // Create a stream $opts = array( 'http'=>array( 'method'=>"GET", 'header' => "Authorization: Basic xxxxxxxxxxxxxxxxxxxxxxxxxxxxx" ) ); $context = stream_context_create($opts); // Open the file using the HTTP headers set above $file = file_get_contents($remote_url, false, $context); print($file); ``` Upvotes: 0
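If rewriting in PHP is not preferred, the same authenticated GET can be scripted for cron in other languages too; a Python sketch with a placeholder URL and credentials (only the request construction is shown — call `urllib.request.urlopen(req)` in the cron job to actually fetch the JSON and then write it to the database):

```python
import base64
import urllib.request

# Placeholder endpoint and credentials -- substitute the real API values.
api_url = "https://api.example.com/DailySales?FromDate=2018-03-16&ToDate=2018-03-17"
api_key = base64.b64encode(b"user:password").decode("ascii")

req = urllib.request.Request(api_url, method="GET")
req.add_header("Authorization", "Basic " + api_key)
req.add_header("Accept", "application/json")
# response = urllib.request.urlopen(req)   # then json.load(response) and store it
```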
2018/03/17
541
2,150
<issue_start>username_0: I want to know is that possible to have one laravel app installed but works differently in 2 directories or no?

purpose
=======

i want have my blogging application in `domain.com` and my store application in `store.domain.com` basically is one laravel application and different installed packages for each directory but authentication and roles are the same.
2018/03/17
886
2,710
<issue_start>username_0: I am trying to run an angular web app. I have cloned confirmed working project, but when I run `pub get` I get following error: ``` /usr/lib/dart/bin/pub get Resolving dependencies... Got dependencies! Precompiling dependencies... Loading source assets... Loading angular, sass_builder and dart_to_js_script_rewriter transformers... Error on line 26, column 5 of ../../.pub-cache/hosted/pub.dartlang.org/angular_components-0.8.0/pubspec.yaml: Error loading transformer: 'package:sass_builder/sass_builder.dart': malformed type: line 34 pos 20: cannot resolve class 'Logger' from 'SassBuilder' final _log = new Logger('sass_builder'); ^ outputExtension: .scss.css ^^^^^^^^^^^^^^^^^^^^^^^^^^^ Process finished with exit code 0 ``` My `pub.yaml`: ``` name: ZwinnexD description: A web app that is a web app version: 0.21.37 #homepage: https://www.example.com #author: <NAME> environment: sdk: '>=1.24.0 <2.0.0' dependencies: angular: ^4.0.0 angular\_components: ^0.8.0 firebase: ^4.0.0 dev\_dependencies: angular\_test: ^1.0.0 browser: ^0.10.0 dart\_to\_js\_script\_rewriter: ^1.0.1 test: ^0.12.0 transformers: - angular: entry\_points: - web/main.dart - test/\*\*\_test.dart - test/pub\_serve: $include: test/\*\*\_test.dart - dart\_to\_js\_script\_rewriter # Uncomment the following in sdk 1.24+ to make pub serve # use dartdevc (webdev.dartlang.org/tools/dartdevc). #web: # compiler: # debug: dartdevc ``` I am using dart 1.24.3. I have tried removing pub-cache, doing cache repair, even reinstalling Dart. Nothing helps. I work on Ubuntu 16.04. Please help. I can also confirm that basic projects from WebStorm don't work either. I am not sure what can I do beside uninstalling and installing dart...<issue_comment>username_1: So far I have figured out that editing `SassBuilder` class in pub-cache fixes the issue. 
You just need to change import of logger from this ``` import 'package:logging/logging.dart'; ``` to this: ``` import 'package:logging/logging.dart' as Log; ``` and also change logger constructor from this: ``` final _log = new Logger('sass_builder'); ``` to this: ``` final _log = new Log.Logger('sass_builder'); ``` I have no idea for better solution right now. Upvotes: 3 <issue_comment>username_2: It was issue in sass-builder. To fix it, use `sass_builder: ^1.1.5` or above in *pubspec.yaml*. Here is [pull request](https://github.com/dart-league/sass_builder/pull/31) with fix. Upvotes: 3 [selected_answer]<issue_comment>username_3: Meanwhile, you can add `sass 1.0.0-beta.5.3` to `pubspec.yaml` and run `pub get` ``` dependency_overrides: sass: 1.0.0-beta.5.3 ``` Upvotes: 0
2018/03/17
1,163
3,554
<issue_start>username_0: I want to insert a result that return by this query: ``` WITH rows AS ( SELECT *, ROW_NUMBER() OVER (ORDER BY [発生時刻]) AS rn FROM [PROC_MN].[dbo].[TBL_FINISH_STATUS] where PO_NO='GV12762' and 発生時刻 BETWEEN '2018/03/16' AND '2018/03/18' AND [加工内容]='Bonding' ) SELECT DATEDIFF(minute, mc.[発生時刻], mp.[発生時刻]) FROM rows mc JOIN rows mp ON mc.rn = mp.rn - 1 ``` The resut is : 91 I use this query but cannot, please help! ``` IF OBJECT_ID('tempdb..#tempTest') IS NOT NULL DROP TABLE #tempTest Insert into #tempTest WITH rows AS ( SELECT *, ROW_NUMBER() OVER (ORDER BY [発生時刻]) AS rn FROM [PROC_MN].[dbo].[TBL_FINISH_STATUS] where PO_NO='GV12762' and 発生時刻 BETWEEN '2018/03/16' AND '2018/03/18' AND [加工内容]='Bonding' ) SELECT DATEDIFF(minute, mc.[発生時刻], mp.[発生時刻]) FROM rows mc JOIN rows mp ON mc.rn = mp.rn - 1 ``` EDITED: This one work for me ``` If(OBJECT_ID('tempdb..#temp') Is Not Null) Begin Drop Table #Temp End create table #Temp ( OptTime int ) ;WITH rows AS ( SELECT *, ROW_NUMBER() OVER (ORDER BY [発生時刻]) AS rn FROM [PROC_MN].[dbo].[TBL_FINISH_STATUS] where PO_NO='GV12762' and 発生時刻 BETWEEN '2018/03/16' AND '2018/03/18' AND [加工内容]='Bonding' ) INSERT INTO #Temp SELECT DATEDIFF(minute, mc.[発生時刻], mp.[発生時刻]) FROM rows mc JOIN rows mp ON mc.rn = mp.rn - 1 ```<issue_comment>username_1: Your syntax is wrong - if you want to insert from a CTE, do this: ``` ; WITH rows AS ( SELECT *, ROW_NUMBER() OVER (ORDER BY [発生時刻]) AS rn FROM [PROC_MN].[dbo].[TBL_FINISH_STATUS] WHERE PO_NO = 'GV12762' AND 発生時刻 BETWEEN '2018/03/16' AND '2018/03/18' AND [加工内容] = 'Bonding' ) INSERT INTO #tempTest SELECT (list of columns) FROM rows WHERE (conditions) ``` See [the official Microsoft Docs](https://learn.microsoft.com/en-us/sql/t-sql/queries/with-common-table-expression-transact-sql) for details on the CTE syntax and how to use it **Update:** if that target temp table doesn't exist yet - use this syntax: ``` ; WITH rows AS ( ..... 
) SELECT (list of columns) INSERT INTO #tempTest FROM rows WHERE (conditions) ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: Not only is the syntax wrong, but the logic is not very good either. Just use `lag()`: ``` INSERT INTO #tempTest SELECT DATEDIFF(minute, LAG(fs.[発生時刻]) OVER (ORDER BY [発生時刻]), fs.[発生時刻]) FROM [PROC_MN].[dbo].[TBL_FINISH_STATUS] fs WHERE PO_NO = 'GV12762' AND 発生時刻 >= '2018-03-16' AND 発生時刻 < '2018-03-18' AND [加工内容] = 'Bonding'; ``` Notes: * Use ISO/ANSI standard syntax for dates. That is a hyphen rather than a slash. * Don't use `BETWEEN` with dates, particularly if they have times. You can refer to this [very helpful blog](https://sqlblog.org/2011/10/19/what-do-between-and-the-devil-have-in-common) post by <NAME>. * If you use the appropriate SQL functionality, your queries will be simpler. * If you want to exclude `NULL`s you can use a subquery or CTE. Upvotes: 2
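The self-join on consecutive row numbers from the question can be reproduced outside SQL Server as well; a sketch using SQLite (table name and timestamps invented, `julianday()` standing in for `DATEDIFF(minute, ...)`, and `rowid` standing in for the `ROW_NUMBER()` column, assuming rows are inserted in time order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE finish_status (ts TEXT)")
conn.executemany("INSERT INTO finish_status VALUES (?)",
                 [("2018-03-16 10:00:00",), ("2018-03-16 11:31:00",)])

# Pair each row with the next one (rowid n with rowid n + 1) and take the
# difference in minutes: julianday() returns fractional days, so * 24 * 60.
diffs = [row[0] for row in conn.execute("""
    SELECT CAST(ROUND((julianday(b.ts) - julianday(a.ts)) * 24 * 60) AS INTEGER)
    FROM finish_status AS a
    JOIN finish_status AS b ON b.rowid = a.rowid + 1
""")]
```

With the two sample timestamps this yields `[91]` — the same 91-minute gap as in the thread above.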
2018/03/17
720
2,110
<issue_start>username_0: I have an image with the following dockerfile:

```
FROM ubuntu:14.04
# others here
```

Then I do as follows:

```
docker pull user/prj:1.0
```

After some days, I update my prj to `2.0`. To save disk, I do as follows:

```
docker rmi user/prj:1.0
docker pull user/prj:1.1
```

But I found the new pull will still cost a lot of time. What's the best practice to speed this process up?
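A sketch of the usual fix (not from this thread; the commands need a running Docker daemon): pull the new tag *before* removing the old one. Image layers shared between `1.0` and `1.1` — such as the `ubuntu:14.04` base — then stay in the local cache and are reported as `Already exists` instead of being downloaded again:

```
# Pull first so shared layers are reused, then drop the old tag:
docker pull user/prj:1.1
docker rmi user/prj:1.0
```

Removing the old tag first deletes any layers no longer referenced, which forces the next pull to fetch them all over again.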
2018/03/17
1,294
4,077
<issue_start>username_0: I want to make the `X` button (close button) of the `MAIN WINDOW` unusable when the modal window is opened. I tried to find the answer I was looking for, but none of the answers I found were what I wanted. I really emphasise that I want to disable the close button of the `MAIN WINDOW`, NOT of the modal window (the tkinter window). Thanks. Here's my code

```
from tkinter import *
from tkinter import messagebox
import sys

class Gameover(object):
    def start(self):
        self.root = Tk()
        self.root.title('Gameover')
        self.lbl = Label(self.root, text = 'Enter your name')
        self.lbl.grid(row = 0)
        self.usertext = StringVar()
        self.myentry = Entry(self.root, textvariable = self.usertext)
        self.myentry.grid(row = 1, column = 0)
        self.lbl = Label(self.root, text = "Your name's length must be shorter than 9 letters")
        self.lbl.grid(row = 2)
        self.mybutton = Button(self.root, text = 'OK', command = self.check_length, width = 10, height = 2)
        self.mybutton.grid(row = 1, column = 1)
        self.counter = 0
        self.root.mainloop()

    def check_length(self):
        if len(self.usertext.get()) < 10:
            self.printMessage()
        else:
            self.errorbox()

    def errorbox(self):
        messagebox.showinfo("Error", "Your name's length must be shorter than 9 letters")

    def printMessage(self):
        global the_score, FILE_DATA
        data = [self.usertext.get(), str(the_score)]
        FILE_DATA.insert(len(FILE_DATA) - 1, data)
        FILE_DATA[len(FILE_DATA) - 2][1] = int(FILE_DATA[len(FILE_DATA) - 2][1])
        for i in range(0, len(FILE_DATA) - 1):
            for j in range(0, len(FILE_DATA) - 2 - i):
                if FILE_DATA[j][1] < FILE_DATA[j + 1][1]:
                    poped_data = FILE_DATA[j]
                    del FILE_DATA[j]
                    FILE_DATA.insert(j + 1, poped_data)
        file = open('score.txt', 'w', encoding = 'utf-8')
        for i in range(len(FILE_DATA) - 1):
            file.write('{} {}\n'.format(FILE_DATA[i][0], FILE_DATA[i][1]))
        file.close()
        sys.exit()
```

There's more code than this, but I don't think I need to post all of it here.
I want to disable the close button of the MAIN WINDOW when this modal window (tkinter window) is opened.
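For reference, the standard tkinter mechanism for intercepting a window's close (X) button is the `WM_DELETE_WINDOW` protocol handler; binding it to a no-op makes the X button do nothing, and re-binding it to `destroy` restores the default. A minimal illustrative sketch (not taken from the thread):

```python
import tkinter as tk

def disable_close(window):
    # Make the window manager's close (X) request a no-op
    window.protocol("WM_DELETE_WINDOW", lambda: None)

def enable_close(window):
    # Restore the default close behaviour
    window.protocol("WM_DELETE_WINDOW", window.destroy)

# Usage sketch (requires a display, so it is not run here):
# root = tk.Tk()
# disable_close(root)   # e.g. right before opening the modal dialog
# ...
# enable_close(root)    # when the modal dialog closes
```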
2018/03/17
1,148
4,138
<issue_start>username_0: I have an Excel VBA macro to open specific Outlook inbox subfolders, but the Outlook sub-folder opens in a new Outlook window. This has the result that if I don't manually close the new window, then after running the macro several times I have multiple Outlook windows open. How can I modify the code so that Outlook simply navigates to the required subfolder in its existing single window instead, please? Currently the new Outlook window opens on top of my Excel spreadsheet. When running the macro, I would like the current instance of Outlook to go to the required subfolder and for it to appear on top of the Excel spreadsheet. I did not write the code below. I have incorporated the code below into a larger macro but the rest is irrelevant.

```
Dim MyOutLookApp As Object
Dim MyNameSpace As Object
Dim MyFolder As Object
Dim This As Variant

'// Late Binding
Set MyOutLookApp = CreateObject("Outlook.Application") '''I have also tried using = GetObject(, "Outlook.Application").... no change
Set MyNameSpace = MyOutLookApp.GetNamespace("MAPI")
On Error GoTo ErrFlder
Set MyFolder = MyNameSpace.GetDefaultFolder(olFldr.olFolderInbox)
Set MyFolder = MyFolder.Folders
Set This = MyFolder
Set This = MyFolder(strJobName) '''strJobname is a string picked up from elsewhere. It is simply the name of the inbox subfolder I want to go to.
This.display

End Sub
```
<issue_comment>username_1: From my answer to [another question](https://stackoverflow.com/a/49030096/8112776): (a [few](https://stackoverflow.com/search?q=user%3A8112776%20outlook), actually)

> When working with applications such as Excel it's important to make sure the application object is properly `.Quit` / `.Close`'d when finished with them, (and to `Set` all objects to `Nothing`), otherwise there's a risk of inadvertently having multiple instances running, which can lead to memory leaks, which leads to crashes and potential data loss.
> To check if there is an existing instance of Outlook, use this function:
>
> ```
> Function IsOutlookOpen()
>     'returns TRUE if Outlook is running
>
>     Dim olApp As Outlook.Application
>
>     On Error Resume Next
>     Set olApp = GetObject(, "Outlook.Application")
>     On Error GoTo 0
>
>     If olApp Is Nothing Then
>         IsOutlookOpen = False
>     Else
>         IsOutlookOpen = True
>     End If
>
> End Function
> ```

More about opening **new vs. existing instances** of Office applications from the source: **[<NAME>](https://www.rondebruin.nl/win/s1/outlook/openclose.htm)**

Upvotes: 1 <issue_comment>username_2: *If you simply want to navigate to the subfolder in its existing single window then work with the [Application.ActiveExplorer Method (Outlook)](https://msdn.microsoft.com/en-us/vba/outlook-vba/articles/application-activeexplorer-method-outlook) and the [CurrentFolder Property](https://msdn.microsoft.com/en-us/library/office/aa211818(v=office.11).aspx)*

*The following example will activate the **`SubFolder`** named **"Temp"** under **`Inbox`** in the same application window*

```
Option Explicit

Public Sub Example()
    Dim olApp As Outlook.Application
    Dim olNS As Outlook.Namespace
    Dim Inbox As Outlook.MAPIFolder
    Dim SubFolder As Outlook.MAPIFolder

    '// Ref to Outlook Inbox
    Set olApp = New Outlook.Application
    Set olNS = olApp.GetNamespace("MAPI")
    Set Inbox = olNS.GetDefaultFolder(olFolderInbox)
    Set SubFolder = Inbox.Folders("Temp")

    If Not SubFolder Is Nothing Then
        Set olApp.ActiveExplorer.CurrentFolder = SubFolder
    Else
        MsgBox "SubFolder Not Found", vbInformation
    End If

End Sub
```

> *[MSDN: Early and Late Binding](https://learn.microsoft.com/en-us/dotnet/visual-basic/programming-guide/language-features/early-late-binding/)*

Upvotes: 0 <issue_comment>username_3: Your code always calls `MAPIFolder.Display`, so you get a new Explorer window every time.
Do the following instead

```
If MyOutLookApp.ActiveExplorer Is Nothing Then
    This.Display
Else
    Set MyOutLookApp.ActiveExplorer.CurrentFolder = This
End If
```

Upvotes: 0
2018/03/17
1,405
4,213
<issue_start>username_0: I'm following this tutorial trying to set up kafka, <https://kafka.apache.org/quickstart>, and I get this error when doing the part on zookeeper. It certainly has something to do with the config, but I don't understand to what extent.

```
bin/zookeeper-server-start.sh config/server.properties
[2018-03-17 03:27:11,623] INFO Reading configuration from: config/server.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2018-03-17 03:27:11,628] ERROR Invalid config, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain)
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing config/server.properties
 at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:154)
 at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:101)
 at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
Caused by: java.lang.NumberFormatException: For input string: "initial.rebalance.delay.ms"
 at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Long.parseLong(Long.java:589)
 at java.lang.Long.parseLong(Long.java:631)
 at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:242)
 at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:150)
 ... 2 more
Invalid config, exiting abnormally
```
<issue_comment>username_1: If you look closely, you'll notice that the command to start Zookeeper is:

```
bin/zookeeper-server-start.sh config/zookeeper.properties
```

Using the `zookeeper.properties` file. It looks like you copied the command to start Kafka, which uses `server.properties`.

Upvotes: 6 [selected_answer]<issue_comment>username_2: Try to remove **config/**, and use the file directly. e.g.
bin/zookeeper-server-start.sh zookeeper.properties

Upvotes: -1 <issue_comment>username_3: You are using the following to start zookeeper:

```
bin/zookeeper-server-start.sh config/server.properties
```

Zookeeper configuration settings are placed in the zookeeper.properties file. So, to start zookeeper:

```
bin/zookeeper-server-start.sh config/zookeeper.properties
```

And for starting the KAFKA server:

```
bin/kafka-server-start.sh config/server.properties
```

Keep in mind which file goes with which server.

Upvotes: 1 <issue_comment>username_4: I was facing the same issue because I was running the step 2 command directly in my terminal, which is wrong. We have to run the step 2 command from inside the Kafka folder. Follow the steps below to run zookeeper successfully.

Step 1: Go to your kafka folder, or simply do cd $KAFKA_HOME

Step 2: bin/zookeeper-server-start.sh ./config/zookeeper.properties

Upvotes: 0 <issue_comment>username_5: I fixed this on Windows by doing the below. Make sure to specify full paths. First, go to the `bin\windows` folder of your kafka installation:

```
C:\kafka_2.12-2.0.0\bin\windows
```

Then, run the following command:

```
.\zookeeper-server-start.bat C:\kafka_2.12-2.0.0\config\zookeeper.properties
```

When done, you will see the following output:

```
[2020-07-06 20:25:13,703] INFO Server environment:user.dir=C:\kafka_2.12-2.0.0\bin\windows (org.apache.zookeeper.server.ZooKeeperServer)
[2020-07-06 20:25:13,716] INFO tickTime set to 3000 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-07-06 20:25:13,717] INFO minSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-07-06 20:25:13,717] INFO maxSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-07-06 20:25:13,787] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
[2020-07-06 20:25:13,791] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
```

Upvotes: 4
<issue_comment>username_6: 1. Check for `dataDir=/tmp/zookeeper1`; if not present, create the directory and one file named `myid` inside it:

```
mkdir -p /tmp/zookeeper1
chmod -R 777 /tmp/zookeeper1
vi /tmp/zookeeper1/myid   (add the value 1; this is the server id from server.1=)
```

Upvotes: 0
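A note on why the stack trace names the *key*: ZooKeeper's config parser treats properties beginning with `group.` as hierarchical-quorum group definitions and parses the remainder of the key as a number, so Kafka's broker-only `group.initial.rebalance.delay.ms` setting trips it up. A toy Python re-creation of that parsing step (this models the behaviour suggested by the stack trace, not ZooKeeper's actual code):

```python
def parse_zk_property(key, value):
    """Toy model of the relevant branch of ZooKeeper's QuorumPeerConfig.

    Keys that start with 'group.' are hierarchical-quorum group
    definitions, and everything after the first dot is parsed as a
    number.  Kafka's 'group.initial.rebalance.delay.ms' therefore fails
    with the Python equivalent of Java's NumberFormatException.
    """
    if key.startswith("group."):
        group_id = int(key.split(".", 1)[1])  # raises ValueError for the Kafka key
        return ("group", group_id, value)
    return ("plain", key, value)

print(parse_zk_property("tickTime", "2000"))  # ('plain', 'tickTime', '2000')
print(parse_zk_property("group.1", "1:2:3"))  # ('group', 1, '1:2:3')
try:
    parse_zk_property("group.initial.rebalance.delay.ms", "0")
except ValueError as err:
    print("invalid config:", err)
```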
2018/03/17
767
2,932
<issue_start>username_0:

```
#include <iostream>
#include <vector>

using namespace std;

void test_bad_alloc(unsigned char *start)
{
    try {
        vector<unsigned char> v = vector<unsigned char>(start + 1, start - 13);
    } catch (bad_alloc) {
        std::cout << "bad alloc" << std::endl;
    }
}
```

This code is similar to my real project. I found it results in a bad_alloc exception. Why? Is the constructor of this vector ambiguous?<issue_comment>username_1: The constructor called by `vector(ptr1, ptr2)` is the range constructor with two iterators. It tries to allocate memory for elements starting at ptr1 and ending before ptr2. But as your ptr2 is smaller than ptr1, you would have a negative number of elements allocated. As `ptr2=start-13` would reference outside of the array allocated, I guess you wanted `ptr2=start+13`. Upvotes: 0 <issue_comment>username_2: Your program has undefined behaviour the moment it tries to evaluate `start - 13`. `start` points to the first element of the `a` array, so `start - 13` creates an invalid pointer, because it's out of the array's bounds. You simply cannot do that, and it doesn't make sense either. What do you expect the pointer to point at, exactly? Mind that [this is undefined behaviour](https://stackoverflow.com/questions/10473573/why-is-out-of-bounds-pointer-arithmetic-undefined-behaviour) even if you do not dereference the pointer. In any case, undefined behaviour means your program can do anything, and "anything" includes throwing random exceptions. And in fact, I cannot reproduce the `std::bad_alloc` exception. For example, the latest Clang throws `std::length_error` instead, and the latest Visual C++ just crashes. It also depends on the compiler flags. What probably happens in your case is that the compiler has produced binary code which simply goes on with the invalid pointer and passes it to the vector constructor, which then tries to create a huge integer range from it, which fails. Perhaps you intended something like this instead?
```
void test_bad_alloc(unsigned char *start, unsigned char *end)
{
    try {
        vector<unsigned char> v = vector<unsigned char>(start + 1, end - 13);
    } catch (bad_alloc) {
        std::cout << "bad alloc" << std::endl;
    }
}
```

---

I also couldn't help but notice a Java-ism in the vector definition line. You can, and should, write it like this instead:

```
vector<unsigned char> v(start + 1, end - 13);
```

Upvotes: 3 [selected_answer]<issue_comment>username_3: The issue is in `vector v = vector(start + 1, start - 13);`. The range constructor expects

```
vector (InputIterator first, InputIterator last,
        const allocator_type& alloc = allocator_type());
```

i.e. first element to last element. However, in place of the second iterator you are using `start - 13`, which is outside the memory space of the allocated array. In the best case it may also be outside the memory space of your process and cause a segmentation fault, and in the worst case it may read garbage values, overwrite files with bad data, or send wrong messages over sockets. Upvotes: 1
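One way to picture why a typical implementation ends up throwing here: with the second iterator before the first, the signed distance is negative, and converting it to the unsigned `size_t` used for the allocation size wraps it to an enormous value. A Python sketch of that arithmetic (assuming a 64-bit `size_t`; the behaviour is formally undefined, so this only models what a common implementation computes):

```python
SIZE_T_BITS = 64

def as_size_t(n):
    """Interpret a possibly negative distance as a 64-bit unsigned size_t."""
    return n % (1 << SIZE_T_BITS)

# vector(start + 1, start - 13): distance = (start - 13) - (start + 1) = -14
requested = as_size_t(-14)

print(requested)              # 18446744073709551602 elements "requested"
print(requested > (1 << 60))  # True: far more than any allocator can satisfy
```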
2018/03/17
1,555
5,506
<issue_start>username_0: I'm reading through Eloquent Javascript and, facing one of the exercises, I found a rather odd behavior (at least for me). The exercise asks to create a function in order to reverse an array. I thought I could loop over the array and each time pop one item from the original array and push it into a temporary array that is finally returned. But as I'm looping over the array, either with a for-of loop or a typical sequential loop, the last item is not transferred. Can someone tell me what happens exactly?

```
const reverseArray = function(array) {
  let rev = [];
  for (let x = 0; x <= array.length; x++) {
    rev.push(array.pop());
    console.log(rev, array)
  }
  return rev;
};

console.log(reverseArray(["A", "B", "C"]));
```

output:

```
["C"] ["A", "B"]
["C", "B"] ["A"]
["C", "B"]
```
<issue_comment>username_1: When `pop()` applies on `array` it decreases the length of the `array`, so when the loop runs it finds one item less than the previous array length. Thus, what you can do is simply assign the length of the `array` to a variable and use that in the comparison of the `for` loop:

```js
let rev = [];
const reverseArray = function(array) {
  var length = array.length;
  for (let x = 0; x < length; x++) {
    rev.push(array.pop());
  }
  return rev;
};

console.log(reverseArray(["A", "B", "C"]));
```

Upvotes: 4 [selected_answer]<issue_comment>username_2: Try using a while loop instead of a for loop

```
const reverseArray = function(array){
  let rev = [];
  while (array.length > 0) {
    rev.push(array.pop());
  }
  return rev;
};

console.log(reverseArray(["A", "B", "C"]));
```

Upvotes: 0 <issue_comment>username_3: As the [`pop()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/pop) method removes the *pop'ed* item from the array, use a `while` loop instead of `for`

```js
let rev = [];
const reverseArray = function(array) {
  while (array.length > 0) { // or just "(array.length)"
    rev.push(array.pop());
  }
  return rev;
}

console.log(reverseArray(["A", "B", "C"]));
```

With a `closure` you can save yourself an extra global variable

```js
const reverseArray = function(array) {
  return (function(a_in, a_out) {
    while (a_in.length > 0) { // or just "(a_in.length)"
      a_out.push(a_in.pop());
    }
    return a_out;
  })(array, []);
}

console.log(reverseArray(["A", "B", "C"]));
```

Or if the `Array.reverse()` method is allowed

```js
const reverseArray = function(array) {
  return array.reverse();
}

console.log(reverseArray(["A", "B", "C"]));
```

Upvotes: 2 <issue_comment>username_4: I put two versions for you. `reverse` will reverse the array without modifying the original array. `reverseModify` will do the same, but empties the original array. Choose the version that suits you better

```js
const arr = [1, 2, 3, 4];

function reverse(array) {
  let result = [];
  for (let i = array.length - 1; i >= 0; i--) {
    result.push(array[i]);
  }
  return result;
}

function reverseModify(array) {
  let result = [];
  while (array.length) {
    result.push(array.pop());
  }
  return result;
}

console.log(reverse(arr), arr);
console.log(reverseModify(arr), arr);
```

Upvotes: 0 <issue_comment>username_5: The problem divides into two independent operations:

1. iterate all elements; usually take the length and perform a loop
2. `pop` a value and `push` that value. This is the mechanism to take the last item and push it to the end of the new array.

The part for iterating needs only the actual length, counting down (by `pop`) to zero and then exiting the loop. For a count up, you need two variables, one for the counter and one for the end or length of the array, which is a static value, taken from the given array, which is later mutating.

```js
function reverseArray(array) {
  var rev = [];
  while (array.length) {
    rev.push(array.pop());
  }
  return rev;
}

console.log(reverseArray(["A", "B", "C"]));
```

Upvotes: 0 <issue_comment>username_6: The [**pop()**](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/pop) method removes the last element of an array, and returns that element. It changes the length of the array.

```js
const popArray = function(array) {
  for (let x = 0; x <= array.length; x++) {
    console.log("value of x :", x);
    console.log("array.length :", array.length);
    array.pop();
    console.log("array after", x + 1, "pop :", array);
  }
};

popArray(["A", "B", "C"]);
```

In the above demo we can see that every time pop() happens on an array, `array.length` decreases and the value of `x` increases. Hence, at a certain point in time (after the 2nd iteration) the value of `x` will be greater than the length of the array.

Use the **Array.from()** method:

```js
let rev = [];
const reverseArray = function(array) {
  var destinationArray = Array.from(array);
  for (var i in Array.from(array)) {
    rev.push(array.pop());
    console.log(rev, array);
  }
};

reverseArray(["A", "B", "C"]);
```

Upvotes: 1
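One subtlety worth spelling out: the JavaScript condition `x <= array.length` re-reads `array.length` on every iteration, which is exactly why the loop exits early. A Python sketch of both behaviours for comparison (illustrative only, not from the thread):

```python
def reverse_buggy(items):
    """Mimics the JS loop: the list length is re-checked on every pass."""
    out = []
    i = 0
    while i <= len(items):       # len(items) shrinks as we pop, like array.length
        out.append(items.pop())
        i += 1
    return out

def reverse_fixed(items):
    """Snapshot the length once up front, as the accepted answer does."""
    out = []
    for _ in range(len(items)):  # evaluated once, before any pop
        out.append(items.pop())
    return out

print(reverse_buggy(["A", "B", "C"]))  # ['C', 'B'] -- the last item is lost
print(reverse_fixed(["A", "B", "C"]))  # ['C', 'B', 'A']
```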
2018/03/17
1,683
5,949
<issue_start>username_0: I can connect to my device fine, I can see the console and inspect the pages, but whenever I try to open the debug tab in the WebIDE it remains blank. I connect to my android device using USB ADB; the debugger feature used to work until all of a sudden it went blank whenever I tried accessing it. I have tried reinstalling Firefox on both my computer and my android device but to no avail. What could be causing this? After some looking around I found an exception that may be related in the Main Process console:

```
error occurred while processing 'listWorkers: TypeError: Ci.nsIWorkerDebugger is undefined
Stack: onListWorkers@resource://devtools/shared/base-loader.js -> resource://devtools/server/actors/tab.js:48:87
onPacket@resource://devtools/shared/base-loader.js -> resource://devtools/server/main.js:141:212
_onJSONObjectReady/<@resource://devtools/shared/base-loader.js -> resource://devtools/shared/transport/transport.js:25:166
exports.makeInfallible/<@resource://devtools/shared/base-loader.js -> resource://devtools/shared/ThreadSafeDevToolsUtils.js:2:467
exports.makeInfallible/<@resource://devtools/shared/base-loader.js -> resource://devtools/shared/ThreadSafeDevToolsUtils.js:2:467
Line: 48, column: 87
```
2018/03/17
1,263
4,064
<issue_start>username_0:

```
Sub Test()

Dim Cell As Range

With Sheets(1)
    ' loop column H until last cell with value (not entire column)
    For Each Cell In .Range("A1:A" & .Cells(.Rows.Count, "A").End(xlUp).Row)
        If Cell.Value = "0" Then
            ' Copy>>Paste in 1-line (no need to use Select)
            .Rows(Cell.Row).Copy Destination:=Sheets(4).Rows(Cell.Row)
        End If
    Next Cell
End With

End Sub
```
<issue_comment>username_1: Try

```
Sub Test()

Dim Cell As Range

With Sheets(1)
    ' loop column H until last cell with value (not entire column)
    For Each Cell In .Range("A1:A" & .Cells(.Rows.Count, "A").End(xlUp).Row)
        If Cell.Value = "0" Then
            ' Copy>>Paste in 1-line (no need to use Select)
            Dim targetRow As Long
            targetRow = Cell.Row
            .Range("A" & targetRow & ":H" & targetRow).Copy Destination:=Sheets(4).Rows(Cell.Row)
        End If
    Next Cell
End With

End Sub
```

But, as that produces a weird repeated paste, is that what you actually wanted?

Note I replaced the variable cell with currentCell to avoid confusion

```
Option Explicit

Sub Test()

Dim currentCell As Range

With Sheets(1)
    For Each currentCell In .Range("A1:A" & .Cells(.Rows.Count, "A").End(xlUp).Row)
        If currentCell.Value = "0" Then
            Dim targetRow As Long
            targetRow = currentCell.Row
            .Range("A" & targetRow & ":H" & targetRow).Copy Destination:=Sheets(4).Cells(currentCell.Row, 1)
        End If
    Next currentCell
End With

End Sub
```

And: as noted by @username_4, you mention column H in your comment.
If you intend to loop this then change:

```
For Each currentCell In .Range("A1:A" & .Cells(.Rows.Count, "A").End(xlUp).Row)
```

For

```
For Each currentCell In .Range("H1:H" & .Cells(.Rows.Count, "H").End(xlUp).Row)
```

Upvotes: 2 <issue_comment>username_2: try

```
Sub Test()

Dim Cell As Range

With Sheets(1)
    ' loop column H until last cell with value (not entire column)
    For Each Cell In .Range("A1:A" & .Cells(.Rows.Count, "A").End(xlUp).Row)
        If Cell.Value = "0" Then
            ' Copy>>Paste in 1-line (no need to use Select)
            Sheets(4).Range("A" & Cell.Row & ":M" & Cell.Row).Value = .Range("A" & Cell.Row & ":M" & Cell.Row).Value
        End If
    Next Cell
End With

End Sub
```

Upvotes: 1 <issue_comment>username_3: Here are some other options for replacing this:

```
.Rows(Cell.Row).Copy
```

With either of the following:

1. `Cell.Resize(1,13).Copy`
2. `Intersect(Cell.EntireRow, .Range("A:M")).Copy`

Or you can loop through the *row* rather than a specific cell:

```
Sub Test()

Dim currentRow As Range

With Sheets(1)
    ' loop column H until last cell with value (not entire column)
    For Each currentRow In .Range("A1:M" & .Cells(.Rows.Count, "A").End(xlUp).Row).Rows
        If currentRow.Cells(1, 1).Value = "0" Then
            ' Copy>>Paste in 1-line (no need to use Select)
            currentRow.Copy Destination:=Sheets(4).Rows(currentRow.Row)
        End If
    Next currentRow
End With

End Sub
```

Upvotes: 2 <issue_comment>username_4: use

```
.Range("A:M").Rows(Cell.row).Copy Destination:=Sheets(4).Rows(Cell.row)
```

but you may be in the wrong column, since you're commenting:

```
' loop column H until last cell with value (not entire column)
```

while you're coding:

```
For Each Cell In .Range("A1:A" & .Cells(.Rows.Count, "A").End(xlUp).row)
```

i.e. you're looping through column A!
so you may want to code:

```
Sub Test()

Dim cell As Range

With Sheets(1)
    ' loop column H until last cell with value (not entire column)
    For Each cell In .Range("H1:H" & .Cells(.Rows.Count, "A").End(xlUp).row) ' loop through column H cells from row 1 down to last not empty row in column A
        If cell.Value = "0" Then .Range("A:M").Rows(cell.row).Copy Destination:=Sheets(4).Rows(cell.row)
    Next cell
End With

End Sub
```

Upvotes: 2
2018/03/17
2,981
13,125
<issue_start>username_0: In this article, <https://www.geeksforgeeks.org/inline-functions-cpp/>, it states that the disadvantages of inlining are:

> 3) Too much inlining can also reduce your instruction cache hit rate, thus reducing the speed of instruction fetch from that of cache memory to that of primary memory.

How does inlining affect the instruction cache hit rate?

> 6) Inline functions might cause thrashing because inlining might increase size of the binary executable file. Thrashing in memory causes performance of computer to degrade.

How does inlining increase the size of the binary executable file? Is it just that it increases the code base length? Moreover, it is not clear to me why having a larger binary executable file would cause thrashing, as the two don't seem linked.<issue_comment>username_1: Let's say you have a function that's 100 instructions long and it takes 10 instructions to call it whenever it's called. That means for 10 calls it uses up 100 + 10 \* 10 = 200 instructions in the binary. Now let's say it's inlined everywhere it's used. That uses up 100\*10 = 1000 instructions in your binary. So for point 3 this means that it will take significantly more space in the instruction cache (different invocations of an inline function are not 'shared' in the i-cache). And for point 6 your total binary size is now bigger, and a bigger binary size can lead to thrashing. Upvotes: 3 <issue_comment>username_2: Generally speaking, inlining tends to increase the emitted code size due to call sites being replaced with larger pieces of code. Consequently, more memory space may be required to hold the code, which may cause thrashing. I'll discuss this in a little more detail.

> How does inlining affect the instruction cache hit rate?

The impact that inlining can have on performance is very difficult to statically characterize in general without actually running the code and measuring its performance.
Yes, inlining may impact the code size and typically makes the emitted native code larger. Let's consider the following cases:

* The code executed within a particular period of time fits within a particular level of the memory hierarchy (say L1I) in both cases (with or without inlining). So performance with respect to that particular level will not change.
* The code executed within a particular period of time fits within a particular level of the memory hierarchy in the case of no inlining, but doesn't fit with inlining. The impact this can have on performance depends on the locality of the executed code. Essentially, if the hottest pieces of code fit within that level of memory, then the miss ratio at that level might increase only slightly. Features of modern processors such as speculative execution, out-of-order execution, and prefetching can hide or reduce the penalty of the additional misses. It's important to note that inlining does improve the locality of code, which can result in a net positive impact on performance despite the increased code size. This is particularly true when the code inlined at a call site is frequently executed. Partial inlining techniques have been developed to inline only the parts of the function that are deemed hot.
* The code executed within a particular period of time doesn't fit within a particular level of the memory hierarchy in both cases. So performance with respect to that particular level will not change.

> Moreover, it is not clear to me why having a larger binary executable file would cause thrashing as the two don't seem linked.

Consider the main memory level on a resource-constrained system. Even a mere 5% increase in code size can cause thrashing at main memory, which can result in significant performance degradations.
On other resource-rich systems (desktops, workstations, servers), thrashing usually only occurs at caches, when the total size of the hot instructions is too large to fit within one or more of the caches. Upvotes: 1 <issue_comment>username_3: If compilers inlined everything they could, most functions would be gigantic. (Although you might just have one gigantic `main` function that calls library functions; in the most extreme case all functions within your program would be inlined into `main`). Imagine if everything was a macro instead of a function, so it fully expanded everywhere you used it. This is the source-level version of inlining.

---

Most functions have multiple call sites. The code size needed to call a function scales a bit with the number of args, but is generally pretty small compared to a medium to large function. So inlining a large function at all of its call sites will increase total code size, reducing I-cache hit rates.

But these days it's common practice to write lots of *small* wrapper / helper functions, especially in C++. The code for a stand-alone version of a small function is often not much bigger than the code necessary to call it, especially when you include the side effects of a function call (like clobbering registers). Inlining small functions can often save code size, especially when further optimizations become possible after inlining. (e.g. the function computes some of the same stuff that code outside the function also computes, so [CSE](https://en.wikipedia.org/wiki/Common_subexpression_elimination) is possible).

So for a compiler, the decision of whether to inline into any specific call site or not should be based on the size of the called function, and maybe whether it's called inside a loop. (Optimizing away the call/ret overhead is more valuable if the call site runs more often.)
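The code-size trade-off described above reduces to simple arithmetic; a small Python sketch with hypothetical instruction counts (the numbers are the first answer's example, not measurements):

```python
def code_size(body, call_overhead, sites, inline):
    """Static instruction count contributed by one function.

    body          -- instructions in the function body
    call_overhead -- instructions per call site (arg setup, call, ret handling)
    sites         -- number of call sites
    inline        -- True if the body is duplicated at every call site
    """
    if inline:
        return body * sites              # one copy per call site, no call code
    return body + call_overhead * sites  # one shared copy plus the calls

# The 100-instruction body with 10-instruction calls and 10 call sites
print(code_size(100, 10, 10, inline=False))  # 200
print(code_size(100, 10, 10, inline=True))   # 1000

# A small helper can *shrink* with inlining: 8-instruction body, 10-instruction calls
print(code_size(8, 10, 10, inline=False))    # 108
print(code_size(8, 10, 10, inline=True))     # 80
```

This is also why the break-even heuristic favours inlining bodies that are no bigger than the call sequence they replace.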
Profile-guided optimization can help a compiler make better decisions, by "spending" more code-size on hot functions, and saving code-size in cold functions (e.g. many functions only run once over the lifetime of the program, while a few hot ones take most of the time). *If* compilers didn't have good heuristics for when to inline, or you override them to be *way* too aggressive, then yes, I-cache misses would be the result. **But modern compilers *do* have good inlining heuristics, and usually this makes programs significantly faster but only a bit larger**. The article you read is talking about why there need to be limits. --- The above code-size reasoning should make it obvious that executable size increases, because it doesn't shrink the data any. Many functions will still have a stand-alone copy in the executable as well as inlined (and optimized) copies at various call sites. There are a few factors that mitigate the I-cache hit rate problem. Better locality (from not jumping around as much) lets code prefetch do a better job. Many programs spend most of their time in a small part of their total code, which usually still fits in I-cache after a bit of inlining. But larger programs (like Firefox or GCC) have lots of code, and call the same functions from many call sites in large "hot" loops. Too much inlining bloating the total code size of each hot loop would hurt I-cache hit rates for them. > > Thrashing in memory causes performance of computer to degrade. > > > <https://en.wikipedia.org/wiki/Thrashing_(computer_science)> On modern computers with multiple GiB of RAM, thrashing of virtual memory (paging) is not plausible unless every program on the system was compiled with extremely aggressive inlining. 
These days most memory is taken up by data, not code (especially pixmaps in a computer running a GUI), so **code would have to explode by a few orders of magnitude to start to make a real difference in overall memory pressure.** Thrashing the I-cache is pretty much the same thing as having lots of I-cache misses. But it would be possible to go beyond that into thrashing the larger unified caches (L2 and L3) that cache code + data. Upvotes: 2 <issue_comment>username_4: It is possible that the confusion about why inlining can hurt i-cache hit rate or cause thrashing lies in the difference between static instruction count and dynamic instruction count. Inlining (almost always) reduces the latter but often increases the former. Let us briefly examine those concepts. Static Instruction Count ------------------------ Static instruction count for some execution trace is the number of unique instructions0 that appear in the binary image. Basically, you just count the instruction lines in an assembly dump. The following snippet of x86 code has a static instruction count of 5 (the `.top:` line is a label which doesn't translate to anything in the binary): ``` mov ecx, 10 mov eax, 0 .top: add eax, ecx dec ecx jnz .top ``` The static instruction count is mostly important for binary size, and caching considerations. Static instruction count may also be referred to simply as "code size" and I'll sometimes use that term below. Dynamic Instruction Count ------------------------- The *dynamic* instruction count, on the other hand, depends on the actual runtime behavior and is the number of instructions *executed*. The same static instruction can be counted multiple times due to loops and other branches, and some instructions included in the static count may never execute at all and so don't count in the dynamic case. 
The snippet above has a dynamic instruction count of `2 + 30 = 32`: the first two instructions are executed once, and then the loop executes 10 times with 3 instructions each iteration. As a very rough approximation, dynamic instruction count is primarily important for runtime performance. The Tradeoff ------------ Many optimizations such as loop unrolling, function cloning, vectorization and so on increase code size (static instruction count) in order to improve runtime performance (often strongly correlated with dynamic instruction count). Inlining is also such an optimization, although with the twist that for some call sites inlining *reduces* both dynamic *and* static instruction count. > > How does inlining affect the instruction cache hit rate? > > > The article mentioned *too much* inlining, and the basic idea here is that a lot of inlining increases the code footprint by increasing the working set's *static instruction count* while usually reducing its *dynamic instruction count*. Since a typical instruction cache1 caches static instructions, a larger static footprint means increased cache pressure and often results in a worse cache hit rate. The increased static instruction count occurs because inlining essentially duplicates the function body at each call site. So rather than one copy of the function body and a few instructions to call the function `N` times, you end up with `N` copies of the function body. Now this is a rather *naive* model of how inlining works since *after* inlining, it may be the case that further optimizations can be done in the context of a particular call-site, which may dramatically reduce the size of the inlined code. In the case of very small inlined functions or a large amount of subsequent optimization, the resulting code may be even *smaller* after inlining, since the remaining code (if any) may be smaller than the overhead involved in calling the function2. 
Still, the basic idea remains: too much inlining can bloat the code in the binary image. The way the i-cache works depends on the *static instruction count* for some execution, or more specifically the number of instruction cache lines touched in the binary image, which is largely a fairly direct function of the static instruction count. That is, the i-cache caches regions of the binary image so the more regions and the larger they are, the larger the cache footprint, even if the dynamic instruction count happens to be lower. > > How does inlining increase the size of the binary executable file? > > > It's exactly the same principle as the i-cache case above: larger static footprint means that more distinct pages need to be paged in, potentially leading to more pressure on the VM system. Now we usually measure code sizes in megabytes, while memory on servers, desktops, etc. is usually measured in gigabytes, so it's highly unlikely that excessive inlining is going to meaningfully contribute to thrashing on such systems. It could perhaps be a concern on much smaller or embedded systems (although the latter often don't have an MMU at all). --- 0 Here *unique* refers, for example, to the IP of the instruction, not to the actual value of the encoded instruction. You might find `inc eax` in multiple places in the binary, but each is unique in this sense since they occur at a different location. 1 There are exceptions, such as some types of trace caches. 2 On x86, the necessary overhead is pretty much just the `call` instruction. Depending on the call site, there may also be other overhead, such as shuffling values into the correct registers to adhere to the ABI, and spilling caller-saved registers. More generally, there may be a large cost to a function call simply because the compiler has to reset many of its assumptions across a function call, such as the state of memory. Upvotes: 4 [selected_answer]
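To make the static/dynamic distinction above concrete, here is a small Python sketch — a toy interpreter, not real x86 semantics (`jnz` is modeled as testing `ecx` directly instead of the zero flag) — that replays the 5-instruction loop and counts both quantities:

```python
# Toy model of the 5-instruction snippet: static count = lines in the
# "binary", dynamic count = instructions actually executed at "runtime".
program = [
    ("mov", "ecx", 10),
    ("mov", "eax", 0),
    ("top",),              # label: occupies no space and never executes
    ("add", "eax", "ecx"),
    ("dec", "ecx"),
    ("jnz", "top"),
]

def run(program):
    labels = {ins[0]: i for i, ins in enumerate(program) if len(ins) == 1}
    regs, pc, executed = {}, 0, 0
    while pc < len(program):
        ins = program[pc]
        if len(ins) == 1:          # a label is not an instruction
            pc += 1
            continue
        executed += 1
        op = ins[0]
        if op == "mov":
            regs[ins[1]] = ins[2]
        elif op == "add":
            regs[ins[1]] += regs[ins[2]]
        elif op == "dec":
            regs[ins[1]] -= 1
        elif op == "jnz" and regs["ecx"] != 0:
            pc = labels[ins[1]]
            continue
        pc += 1
    return regs, executed

static = sum(1 for ins in program if len(ins) > 1)
regs, dynamic = run(program)
print("static:", static, "dynamic:", dynamic, "eax:", regs["eax"])
# static: 5 dynamic: 32 eax: 55
```

The dynamic count matches the `2 + 30 = 32` arithmetic in the answer: two setup instructions plus ten iterations of three instructions each.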
2018/03/17
553
1,978
<issue_start>username_0: I have a solr indexed table in which I want to filter the results if one column is greater than the other, so far I am able to specify the range of a particular column but cannot compare two columns. I read in this [post](https://stackoverflow.com/questions/36996043/how-to-compare-two-columns-using-solr) that I need to create a new column in which I have to store the comparison and then fetch the results from there. But I have no clue on how to begin with it. Any help will be appreciated. Edit 1: Show results if column1 > column2<issue_comment>username_1: You can implement a function query in a `fq` parameter (`fq` stands for filter query, but accepts a query parser that allows functions): ``` fq={!frange l=0}sub(column1,column2) ``` (If you don't want to include documents where column1 and column2 are equal, change 0 to `0.1` or something similar) If you're always comparing these two columns in the same way, the previous answer about doing it at index time is really the best way, as it allows Solr to cache the result and build a proper index across the result. Exactly how you do that depends on how you're indexing, so it'll be up to your own code. You'll define a boolean field `column1_larger_than_column2` (which has the BoolField type), then assign a value based on the result of the subtraction: ``` doc.set('column1_larger_than_column2', (column1 - column2) > 0) ``` Upvotes: 1 <issue_comment>username_2: @username_1 Great answer! But I think there is an error in writing. It should be sub, not sum: ``` fq={!frange l=0 incl=false}sub(column1,column2) ``` to find only documents where column1's value is greater than column2's value or ``` {!frange l=0 incl=false}sub(column2,column1) ``` to find only documents where column1's value is lower than column2's value. Also "incl=false" ensures that column1 is always greater/lower than column2, never equal. To find >= or <= just omit "incl=false". Upvotes: 0
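For intuition, the `frange` filter above keeps documents whose function value lies at or above the lower bound `l` (with `incl` controlling whether the bound itself counts). A tiny plain-Python sketch of the same predicate — this is only an illustration of the logic, not a Solr API:

```python
def frange_sub(docs, l=0, incl=True):
    """Keep docs where column1 - column2 is above l (or equal, if incl)."""
    keep = []
    for d in docs:
        v = d["column1"] - d["column2"]
        if v > l or (incl and v == l):
            keep.append(d)
    return keep

docs = [
    {"column1": 5, "column2": 3},   # 5 > 3: kept
    {"column1": 2, "column2": 2},   # equal: kept only when incl=True
    {"column1": 1, "column2": 4},   # 1 < 4: dropped
]
print(frange_sub(docs, l=0, incl=False))  # keeps only the first doc
```

This mirrors why omitting `incl=false` turns the strict `>` comparison into `>=`.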
2018/03/17
653
2,376
<issue_start>username_0: I am referring to this feedback: [Azure Storage Blob Trigger to fire when files added in Sub Folders](https://feedback.azure.com/forums/287593-logic-apps/suggestions/20164843-azure-storage-blob-trigger-to-fire-when-files-adde) I have an Azure Logic App that is fired every time a blob is added in a container. This works fine when all the documents are at the root of the container. [![Azure Logic App - Blob Trigger](https://i.stack.imgur.com/8vOJI.png)](https://i.stack.imgur.com/8vOJI.png) Inside my container, I have a dynamic number of (virtual) sub-folders. When I add a new document in a subfolder (path = `mysubfolder/mynewdocument.txt`), the logic app is not triggered. This does not really make sense to me, as sub-folders in the blob container are virtual. Has anyone found a workaround other than putting all the files at the root level? I've opened an issue on Github: <https://github.com/Azure/logicapps/issues/20><issue_comment>username_1: > > This does not really make sense to me, as sub-folders in the blob container are virtual. Has anyone found a workaround other than putting all the files at the root level? > > > I can also reproduce it on my side. I recommend using the Azure Function App blob trigger instead of the Azure Logic App blob trigger. An Azure blob-triggered function is fired when you add a new document in a (virtual) subfolder. Upvotes: 2 <issue_comment>username_2: This is the expected behavior. None of the Logic App Triggers that work with 'files' support subfolders. This has been the case with BizTalk Server as well since 2000 so I would not expect a change anytime soon :(. 
Please create or vote on a User Voice for this issue: [User Voice - Logic Apps](https://feedback.azure.com/forums/287593-logic-apps) Upvotes: 3 [selected_answer]<issue_comment>username_3: At the time I was developing this feature (early 2018), [EventGrid](https://azure.microsoft.com/en-us/services/event-grid/) was still in preview, so I ended up using [Azure Function - Blob trigger](https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-storage-blob-triggered-function). I would definitely use [EventGrid - Blob Event](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-event-overview) now; it works with Logic Apps / Function Apps or any HTTP endpoint. Upvotes: 1
2018/03/17
214
749
<issue_start>username_0: In my view file, I have this filter ``` ``` And I have set that filter to be enabled by default. ``` "search_default_today":1 ``` In my database, my table has a date column which is a date type (not DateTime). The Ubuntu server's time, timezone and date settings are already correct. Postgresql's timezone is set to local time. But when I open the list view it shows yesterday's record. The filter works but not in the right way. When I discard the filter it shows all records. When I apply the filter it shows yesterday's record. What did I do wrong here?<issue_comment>username_1: Try this, ``` ``` hope it will help you. Upvotes: 0 <issue_comment>username_2: Try this: ```xml ``` OR: ```xml ``` Upvotes: 2 [selected_answer]
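A common cause of this kind of off-by-one is that the server evaluates "today" in UTC while the user sits in a timezone ahead of UTC, so shortly after local midnight the two disagree by a day. A plain-Python illustration of the pitfall (no Odoo APIs — the timezone and timestamp are made up for the demo):

```python
from datetime import datetime, timedelta, timezone

# A user at UTC+10, shortly after local midnight.
local_tz = timezone(timedelta(hours=10))
local_now = datetime(2018, 3, 17, 1, 0, tzinfo=local_tz)

utc_today = local_now.astimezone(timezone.utc).date()  # still the 16th in UTC
local_today = local_now.date()                         # already the 17th locally

print(utc_today, local_today)  # 2018-03-16 2018-03-17
```

If the filter compares the date column against the UTC "today", records from the user's local "today" can appear to be yesterday's, which matches the symptom described above.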
2018/03/17
1,250
4,499
<issue_start>username_0: I am trying to drop a foreign key using a migration. Here is my code ``` public function up() { Schema::table('tbl_social_media_links', function (Blueprint $table) { $table->renameColumn('vchr_link', 'vchr_social_media_link'); $table->dropColumn('vchr_social_media_name'); $table->integer('fk_int_business_id')->unsigned()->after('pk_int_sm_id'); $table->foreign('fk_int_business_id')->references('pk_int_business_id') ->on('tbl_business_details')->onDelete('cascade'); $table->integer('int_social_media_type')->after('fk_int_business_id'); }); } /** * Reverse the migrations. * * @return void */ public function down() { Schema::table('tbl_social_media_links', function (Blueprint $table) { Schema::disableForeignKeyConstraints(); $table->string('vchr_social_media_name')->after('pk_int_sm_id'); $table->dropColumn('fk_int_business_id'); $table->dropColumn('int_social_media_type'); $table->renameColumn('vchr_social_media_link', 'vchr_link'); Schema::enableForeignKeyConstraints(); }); } ``` **ALSO TRIED** $table->dropForiegn('fk\_int\_business\_id'); I keep getting errors like ``` General error: 1553 Cannot drop index 'tbl_social_media_links_fk_int_business_id_foreign': needed in a foreign key constraint ( SQL: alter table `tbl_social_media_links` drop `fk_int_business_id`) ``` Can someone please help me get it to work? I even tried with sql to drop it but it says `Can't DROP 'fk_int_business_id'; check that column/key exists`<issue_comment>username_1: You can't drop the column while another index or foreign key constraint depends on it. Here is a similar question: [MySQL Cannot drop index needed in a foreign key constraint](https://stackoverflow.com/questions/8482346/mysql-cannot-drop-index-needed-in-a-foreign-key-constraint) Upvotes: 0 <issue_comment>username_2: This is how I would do it: ``` /** * Run the migrations. 
* * @return void */ public function up() { Schema::table('tbl_social_media_links', function (Blueprint $table) { $table->foreign('fk_int_business_id') ->references('pk_int_business_id') ->on('tbl_business_details') ->onDelete('cascade'); }); } /** * Reverse the migrations. * * @return void */ public function down() { Schema::table('tbl_social_media_links', function (Blueprint $table) { $table->dropForeign(['fk_int_business_id']); }); } ``` Notice that I'm using a string array as an argument in `dropForeign` instead of a string, because the behaviour is different. With an array, you can use the column name, but with a string, you need to use the key name instead. However, I found out that, even when it removes the foreign key, an index key remains, so this would solve it: ``` /** * Run the migrations. * * @return void */ public function up() { Schema::table('tbl_social_media_links', function (Blueprint $table) { $table->foreign('fk_int_business_id') ->references('pk_int_business_id') ->on('tbl_business_details') ->onDelete('cascade'); }); } /** * Reverse the migrations. * * @return void */ public function down() { Schema::table('tbl_social_media_links', function (Blueprint $table) { $index = strtolower('tbl_social_media_links'.'_'.implode('_', ['fk_int_business_id']).'_foreign'); $index = str_replace(['-', '.'], '_', $index); $table->dropForeign($index); $table->dropIndex($index); }); } ``` Also notice that, in this case, I'm generating the key name instead of using an array, as I couldn't find any other way to drop the index key with just the column name. On the other hand, be aware that you may need to split the statements to drop the column: ``` /** * Reverse the migrations. 
* * @return void */ public function down() { Schema::table('tbl_social_media_links', function (Blueprint $table) { $index = strtolower('tbl_social_media_links'.'_'.implode('_', ['fk_int_business_id']).'_foreign'); $index = str_replace(['-', '.'], '_', $index); $table->dropForeign($index); $table->dropIndex($index); }); Schema::table('tbl_social_media_links', function (Blueprint $table) { $table->dropColumn('fk_int_business_id'); }); } ``` Upvotes: 2
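The index-name string built in the `down()` method above follows the default convention of `{table}_{columns}_foreign`, lowercased, with `-` and `.` mapped to `_`. A small Python sketch of that same naming logic, for illustration (the function name is made up, and this mirrors the answer's string-building rather than any official API):

```python
def fk_index_name(table, columns):
    """Build the default foreign-key index name used in the answer above:
    lowercase "{table}_{col1}_{col2}_foreign", with "-" and "." mapped to "_"."""
    name = "{}_{}_foreign".format(table, "_".join(columns)).lower()
    return name.replace("-", "_").replace(".", "_")

print(fk_index_name("tbl_social_media_links", ["fk_int_business_id"]))
# tbl_social_media_links_fk_int_business_id_foreign
```

This is exactly the name that appears in the MySQL error message in the question, which is why generating it this way lets `dropForeign`/`dropIndex` target the right key.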
2018/03/17
934
3,508
<issue_start>username_0: I have a research project which is focusing on understanding user's curiosity and connecting to Free WIFI including remote areas where internet is poor here. To do this, we developed simple webpages with the first as a Visitor's form where we need a user/visitor to enter their data and let it be retrieved, saved or viewed later as user logs. We want to do this by simply having a Router and a Flash disk where the webfiles and data storing system will be...No server! Is this very possible with Javascript, xml and or any other languages anyone has ever done this? That is except Javascript's LocalStorage which in this say, the user will be the one to have his/her own data.
2018/03/17
607
1,714
<issue_start>username_0: I want to submit a series of jobs to a cluster. I am using software that runs through each line of a file, but it has an option to define intervals of lines so you can parallelise it. The command line looks like this: ``` # Run MetaTissueMM (Mixed Model) to obtain estimates of effects ~/Meta-Tissue/Meta-Tissue.v.0.5/./MetaTissueMM \ --expr ~/Meta-Tissue/output_gene.txt \ --geno /~Meta-Tissue/output_snp.txt \ --matrix ~/Meta-Tissue/matrix.txt \ --output ~/Meta-Tissue/MetaTissue \ --start_snp_index 0 \ --end_snp_index 1000 ``` The variables I want to modify are `--start_snp_index` and `--end_snp_index`, my file has 8743544 lines, so I would like to split them into intervals of 1000 lines, so these two options would be `--start_snp_index 0 --end_snp_index 1000 --start_snp_index 1001 --end_snp_index 2000` and so on. My knowledge of bash is very limited; I tried to define a range with `{..}` but obviously it doesn't work<issue_comment>username_1: Something like this: ``` #!/bin/bash for a in {0..8744} do MetaTissueMM --foo --bar --start_snp_index $((a*1000)) --end_snp_index $(((a+1)*1000)) done ``` Upvotes: 1 [selected_answer]<issue_comment>username_2: You could use a simple counting loop: ``` lines=8743544 for ((start = 0; start <= lines; start += 1000)); do prog --start_snp_index $start --end_snp_index $((start + 999)) done ``` In Bash 4, the brace expansion allows you to specify increments, so you could use this: ``` for start in {0..8743544..1000}; do prog --start_snp_index $start --end_snp_index $((start + 999)) done ``` Note that I'm not a big fan of this brace expansion, because it doesn't allow using variables inside the `{..}` expression. Upvotes: 1
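A sketch of the `--start_snp_index`/`--end_snp_index` pairs produced by the `start + 999` pattern from the second answer, with a built-in sanity check that consecutive intervals neither gap nor overlap (the 3000 cap just keeps the demo short; for the real run, loop up to the full line count):

```shell
step=1000
prev_end=-1
for start in $(seq 0 "$step" 3000); do
    end=$((start + step - 1))
    # sanity check: each interval starts right after the previous one ends
    [ "$start" -eq $((prev_end + 1)) ] || { echo "gap/overlap at $start"; exit 1; }
    echo "--start_snp_index $start --end_snp_index $end"
    prev_end=$end
done
```

This prints `0 999`, `1000 1999`, and so on — note that using `end = (a+1)*1000` instead, as in the first answer, makes each boundary line fall into two intervals if both endpoints are inclusive.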
2018/03/17
593
1,801
<issue_start>username_0: ``` #include using namespace std; ``` The limit is to make 10 input numbers ``` const int LIMIT = 10; int main () { ``` This is just declaring variables ``` float counter ; int number ; int zeros; int odds; int evens; cout << "Please enter " << LIMIT << "integers, " << "positive, negative, or zeros." << endl; cout << "The numbers you entered are:" << endl; ``` From here I am trying to print out all the numbers that the user has inputted and checking if it is odd or even ``` for (counter = 1; counter <= LIMIT; counter++) { cin >> number; switch(number % 2) { case 0: ``` So here I am trying to count zeros as even numbers so at the end the output of even numbers will include zero if the list has a zero ``` if (number == 0 ){ zeros++; evens++; } case 1: case -1: odds++; } } cout << endl; cout << "There are " << evens << " evens," <<"which includes " << zeros << " zeros." < ```
2018/03/17
515
1,741
<issue_start>username_0: I get the following error message from the terminal (PowerShell) when I try to run my echo.js file using Node.js ```js PS C:\Users\ASUS\Dropbox\Web Development\Backend\Intro To Node> node echo.js C:\Users\ASUS\Dropbox\Web Development\Backend\Intro To Node\echo.js:1 (function (exports, require, module, __filename, __dirname) { �� ^ SyntaxError: Invalid or unexpected token at createScript (vm.js:80:10) at Object.runInThisContext (vm.js:139:10) at Module._compile (module.js:616:28) at Object.Module._extensions..js (module.js:663:10) at Module.load (module.js:565:32) at tryModuleLoad (module.js:505:12) at Function.Module._load (module.js:497:3) at Function.Module.runMain (module.js:693:10) at startup (bootstrap_node.js:188:16) at bootstrap_node.js:609:3 ```<issue_comment>username_1: Your `echo.js` file is most likely UTF-16LE-encoded, which is an encoding that `node` doesn't support. (The `��` are the BOM (Unicode byte-order mark) characters, which `node` misinterprets as *data*). To fix this problem, **save `echo.js` using *UTF-8* encoding**. While you can choose UTF-8 *with* a BOM, the BOM isn't necessary and is better avoided. Upvotes: 4 [selected_answer]<issue_comment>username_2: On a Mac, use TextEdit. Go to Format and select Make Plain Text. Then, while saving the file, open Customize Encoding List under the plain-text encoding options, select Unicode (UTF-8), and unselect everything else. You can also get this error when you copy-paste code: smart (curly) quotes get pasted in place of plain single quotes, and replacing them should make the error go away. Upvotes: 0
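If you prefer the command line, one way to re-encode the file is `iconv`. A sketch, where the file names are illustrative and the first line just manufactures a UTF-16LE file to stand in for the problematic one:

```shell
# Make a UTF-16LE-encoded script to stand in for the problematic file.
printf 'console.log("hi");\n' | iconv -f UTF-8 -t UTF-16LE > echo-utf16.js
# Convert it to UTF-8, which node can load.
iconv -f UTF-16LE -t UTF-8 echo-utf16.js > echo.js
cat echo.js  # prints console.log("hi");
```

After the conversion, `node echo.js` no longer trips over the stray bytes at the start of the file.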
2018/03/17
252
866
<issue_start>username_0: When I open a new terminal and execute "export", I can always find "http_proxy=127.0.0.1:1080" there. But after checking .bashrc in my home folder, there is nothing related to the variable "http_proxy". So the question is, where might the environment variable http_proxy be set? Or how can I find out the location where "http_proxy" is set?<issue_comment>username_1: Check `/etc/environment` and `/etc/bashrc`. Also check what the system default shell is; if you use `zsh`, you may find it in `~/.zshrc`. Upvotes: -1 [selected_answer]<issue_comment>username_2: Grepping in a terminal may help, too: ``` grep --exclude-dir={proc,tmp,mnt,sys,boot,run} --exclude=\*.{md,js,html} -rnw / -e 'http_proxy'>>http_proxy.log ``` After the results are printed to http_proxy.log in the current folder, we can search it to find where http_proxy is set. Upvotes: 1
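Before grepping the whole filesystem, it is usually enough to check the handful of startup files that shells read. A sketch (the function name is made up, and the file list is a common starting point, not exhaustive):

```shell
# Print any http_proxy mentions found in the given files, with file:line prefixes.
find_http_proxy() {
    for f in "$@"; do
        [ -f "$f" ] && grep -Hn 'http_proxy' "$f"
    done
    return 0  # "no matches" is not an error here
}

find_http_proxy ~/.bashrc ~/.profile ~/.bash_profile /etc/environment /etc/profile /etc/profile.d/*.sh
```

Whichever file the match comes from is where the variable is being exported.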
2018/03/17
599
2,029
<issue_start>username_0: When I execute the command `./byfn.sh -m up`, the following error occurs: > > Starting with channel 'mychannel' and CLI timeout of '10' seconds and CLI delay of '3' seconds > > > Continue? [Y/n] y > > > proceeding ... > > > docker: Cannot connect to the Docker daemon at tcp://127.0.0.1:4243. Is the docker daemon running?. > > > See 'docker run --help'. > > > LOCAL\_VERSION=1.1.0-rc1 > > > DOCKER\_IMAGE\_VERSION= > > > =================== WARNING =================== > > > Local fabric binaries and docker images are > > > out of sync. This may cause problems. > > > =============================================== > > > ERROR: Couldn't connect to Docker daemon. You might need to start Docker for Mac. > > > ERROR !!!! Unable to start network > > ><issue_comment>username_1: You have to re-sync the docker images first: ``` fabric-samples/scripts$ sudo ./fabric-preload.sh ``` Make sure you have the right version set in the script. In your case `VERSION=1.1.0-rc1` Upvotes: 0 <issue_comment>username_2: You have to give it permission; try `sudo ./byfn.sh -m up` Upvotes: 0 <issue_comment>username_3: This message means your Docker daemon has not started yet; please start Docker first. Upvotes: 0 <issue_comment>username_4: You are trying to use different versions of the Fabric binaries and the Docker images. Upvotes: 0 <issue_comment>username_5: You can update yum and Docker to the latest version: run `yum update`, and see the Docker install docs: <https://docs.docker.com/install/linux/docker-ce/centos/> Upvotes: 0 <issue_comment>username_6: 1. Fetch bootstrap.sh from the fabric repository: `curl -sS https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh -o ./scripts/bootstrap.sh` 2. Change the file mode to executable: `chmod +x ./scripts/bootstrap.sh` 3. Download the binaries and docker images: `./scripts/bootstrap.sh [version]` Upvotes: 2
2018/03/17
1,393
5,798
<issue_start>username_0: I started learning about Java agents a few days ago, but the documentation is not very good and beginners like me struggle to understand the basics. I created a basic multiplier class and exported it to a runnable jar using Eclipse. Here is the code snippet. **Main jar file:** ``` public class Multiplier { public static void main(String[] args) { int x = 10; int y = 25; int z = x * y; System.out.println("Multiply of x*y = " + z); } } ``` [Bytecode for above class](https://i.stack.imgur.com/WPD1v.png) Now I want to manipulate the value of x from an agent. I tried to create the Agent class like this **Agent:** ``` package myagent; import org.objectweb.asm.*; import java.lang.instrument.*; public class Agent { public static void premain(final String agentArg, final Instrumentation inst) { System.out.println("Agent Started"); int x_modified = 5; //Now How to push the new value (x_modified) to the multiplier class? //I know I have to use ASM but can't figure it out how to do it. //Result should be 125 } } ``` **My Question** How do I set the value of x from the agent class in the multiplier class using ASM? The result should be 125.<issue_comment>username_1: You have declared x inside the main method, so its scope is local. That's why you can't change the value of x from any other class. Upvotes: 1 <issue_comment>username_2: To use ASM you need a custom CodeWriter in a custom ClassWriter which you pass to a ClassReader. <http://asm.ow2.org/doc/tutorial.html> This will allow you to visit all instructions in the code for each method. In particular you will need to override the `visitIntInsn` method so when you see the first `BIPUSH` instruction in `main` you can replace the value 10 with whatever value you choose. The output of ClassWriter is a byte[] which your Instrumentation will return instead of the original code, at which point `x` will be whatever value you made it in the code. 
Upvotes: 0 <issue_comment>username_3: The first thing your agent has to do is [register a `ClassFileTransformer`](https://docs.oracle.com/javase/8/docs/api/java/lang/instrument/Instrumentation.html#addTransformer-java.lang.instrument.ClassFileTransformer-). The first thing the class file transformer should do in its `transform` method is check the arguments to find out whether the current request is about the class we're interested in, to return immediately if not. If we are at the class we want to transform, we have to process the incoming class file bytes to return a new byte array. You can use ASM's [`ClassReader`](http://asm.ow2.org/asm50/javadoc/user/?org/objectweb/asm/ClassReader.html) to process the incoming bytes and chain it to a [`ClassWriter`](http://asm.ow2.org/asm50/javadoc/user/?org/objectweb/asm/ClassWriter.html) to produce a new array: ``` import java.lang.instrument.*; import java.security.ProtectionDomain; import org.objectweb.asm.*; public class ExampleAgent implements ClassFileTransformer { private static final String TRANSFORM_CLASS = "Multiplier"; private static final String TRANSFORM_METHOD_NAME = "main"; private static final String TRANSFORM_METHOD_DESC = "([Ljava/lang/String;)V"; public static void premain(String arg, Instrumentation instrumentation) { instrumentation.addTransformer(new ExampleAgent()); } public byte[] transform(ClassLoader loader, String className, Class cl, ProtectionDomain pd, byte[] classfileBuffer) { if(!TRANSFORM_CLASS.equals(className)) return null; ClassReader cr = new ClassReader(classfileBuffer); ClassWriter cw = new ClassWriter(cr, 0); cr.accept(new ClassVisitor(Opcodes.ASM5, cw) { @Override public MethodVisitor visitMethod(int access, String name, String desc, String signature, String[] exceptions) { MethodVisitor mv = super.visitMethod( access, name, desc, signature, exceptions); if(name.equals(TRANSFORM_METHOD_NAME) && desc.equals(TRANSFORM_METHOD_DESC)) { return new MethodVisitor(Opcodes.ASM5, mv) { 
@Override public void visitIntInsn(int opcode, int operand) { if(opcode == Opcodes.BIPUSH && operand == 10) operand = 5; super.visitIntInsn(opcode, operand); } }; } return mv; } }, 0); return cw.toByteArray(); } } ``` Note that by passing the `ClassWriter` to our custom `ClassVisitor`’s constructor and passing the `MethodVisitor` returned by the `super.visitMethod` invocation to our `MethodVisitor`’s constructor, we enable a chaining that reproduces the original class by default; all methods we’re not overriding will delegate to the specified `ClassWriter`/`MethodVisitor` reproducing the encountered artifact. Compare with [the tutorial about ASM’s event model](http://asm.ow2.org/doc/tutorial-asm-2.0.html). The example above enables an optimization by also passing the `ClassReader` instance to the `ClassWriter`’s constructor. This improves the efficiency of instrumenting a class when only making small changes, like we do here. The crucial part is overriding `visitMethod` to return our custom `MethodVisitor` when we are at the “hot” method and overriding `visitIntInsn` to change the desired instruction. Note how these methods delegate to the `super` calls when not altering the behavior, just like the methods we didn’t override. Upvotes: 3 [selected_answer]
2018/03/17
626
2,123
<issue_start>username_0: When a `BUILD` phaser is called, it overrides default attribute assignment in Perl6. Suppose we have to use that BUILD phaser, like we do in [this module](https://github.com/fayland/perl6-WebService-GitHub/blob/master/lib/WebService/GitHub/Role.pm) (that's where I met this problem). What's the way of assigning values to attributes in that phase? I have used this ``` class my-class { has $.dash-attribute; submethod BUILD(*%args) { for %args.kv -> $k, $value { self."$k"( $value ); } } }; my $my-instance = my-class.new( dash-attribute => 'This is the attribute' ); ``` And I get this error ``` Too many positionals passed; expected 1 argument but got 2 ``` Other combinations of `$!`or `$.`, direct assignment, declaring the attribute as `rw` (same error) yield different kind of errors. This is probably just a syntax issue, but I couldn't find the solution. Any help will be appreciated.<issue_comment>username_1: There are two things wrong in your example, the way I see it. First of all, if you want an attribute to be writeable, you will need to mark it `is rw`. Secondly, changing the value of an attribute is done by assignment, rather than by giving the new value as an argument. So I think the code should be: ``` class my-class { has $.dash-attribute is rw; submethod BUILD(*%args) { for %args.kv -> $k, $value { self."$k"() = $value; } } }; my $my-instance = my-class.new( dash-attribute => 'attribute value' ); dd $my-instance; # my-class $my-instance = my-class.new(dash-attribute => "attribute value") ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: You could do it the same way the object system normally does it under the hood for you. (not recommended) ```perl6 class C { has $.d; submethod BUILD ( *%args ){ for self.^attributes { my $short-name = .name.substr(2); # remove leading 「$!」 next unless %args{$short-name}:exists; .set_value( self, %args{$short-name} ) } } } say C.new(d => 42) ``` ```none C.new(d => 42) ``` Upvotes: 2
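A related note: besides `BUILD`, Perl 6 also has the `TWEAK` submethod, which runs *after* the default attribute initialization has already happened, so it avoids the problem of `BUILD` replacing the default assignment entirely (an untested sketch):

```
class my-class {
    has $.dash-attribute;

    # TWEAK runs after the attributes have been initialized from the
    # named arguments, unlike BUILD, which replaces that step.
    submethod TWEAK() {
        say "initialized with ", $!dash-attribute;
    }
}

my $my-instance = my-class.new( dash-attribute => 'This is the attribute' );
say $my-instance.dash-attribute;
```

This only helps when the goal is to run extra code at construction time; if the goal is to bypass the default initialization, `BUILD` as shown in the answers is the right tool.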
2018/03/17
433
1,422
<issue_start>username_0: I'm using **Swift 3**. I've already made these chat bubbles, but I want them to change size... for example, when there are 4 lines of text the bubble is not changing size (getting bigger), and I want that to happen automatically. How can I do it? Thank You!
2018/03/17
1,136
4,188
<issue_start>username_0: I have a data frame as df1 which contains a column of the name of the university as University\_name and has 500000 number of rows. Now I have another data frame as df2 which contains 2 columns as university\_name and university\_aliases and has 150 rows. Now I want to match each university alias present in the university\_aliases column with university names present in university\_name\_new. a sample of df1$university\_name ``` university of auckland the university of auckland university of warwick - warwick business school unv of warwick seneca college of applied arts and technology seneca college univ of auckland ``` sample of df2 ``` University_Alias Univeristy_Name_new univ of auckland university of auckland universiry of auckland university of auckland auckland university university of auckland university of auckland university of auckland warwick university university of warwick warwick univercity university of warwick university of warwick university of warwick seneca college seneca college unv of warwick university of warwick ``` I am expecting output like this ``` university of auckland university of auckland university of warwick seneca college seneca college ``` and I am using the following code but it is not working ``` df$university_name[ grepl(df$university_name,df2$university_alias)] <- df2$university_name_new ```<issue_comment>username_1: You could do this ``` df2$University_Name_new[which(is.element(df2$University_Alias, df1$university_name))] ### which returns the following #### [1] "university of auckland" "seneca college" ``` Now for instance, in the data you provided `the university of auckland` is in `df1$university_name` but not in `df2$University_Alias`, which is why we have the following: ``` > which(is.element(df2$University_Alias, df1$university_name)) [1] 4 8 ``` Indeed, from `df1$university_name`, only `university of auckland` and `seneca college` are contained in `df2$University_Alias`. 
Upvotes: 0 <issue_comment>username_2: You can use `sapply` and `str_extract` to get the desired result. ``` library(stringr) # str_extract comes from stringr # create sample data df1 <- data.frame(university_name = c('university of auckland', 'the university of auckland', 'university of warwick - warwick business school', 'seneca college of applied arts and technology', 'seneca college'), stringsAsFactors = F) # these are values to match (from df2) vals <- c('university of auckland','university of warwick','seneca college') # get the output df1$output <- sapply(df1$university_name, function(z)({ f <- vals[complete.cases(str_extract(string = z, pattern = vals))] return(f) }), USE.NAMES = F) print(df1) university_name output 1 university of auckland university of auckland 2 the university of auckland university of auckland 3 university of warwick - warwick business school university of warwick 4 seneca college of applied arts and technology seneca college 5 seneca college seneca college ``` **Update:** Based on my understanding, `df2` already has a one-to-one mapping of `university_alias` to `university_name_new`, so the problem comes down to removing the rows whose `university_alias` is not present in `df1`. ``` # check values for university_alias in university_name maps2 <- as.character(df2$university_alias[which(df2$university_alias %in% df1$university_name)]) # remove unmatched rows from df2 df3 <- df2[df2$university_alias %in% maps2,] print(df3) university_alias university_name_new 1 univ of auckland university of auckland 4 university of auckland university of auckland 8 seneca college seneca college 9 unv of warwick university of warwick ``` Upvotes: 2 [selected_answer]
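A dependency-free variant of the same lookup (base R only; a sketch that treats `df2` as an alias-to-canonical-name table, using the column names as they appear in the answers above):

```
# exact alias lookup: returns NA where no alias matches
idx <- match(df1$university_name, df2$university_alias)
df1$output <- df2$university_name_new[idx]
```

This only covers exact matches of the alias strings, unlike the partial matching that `str_extract` performs.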
2018/03/17
770
3,044
<issue_start>username_0: I have an action which in turn must affect many other areas of my app state. In this case, when the user selects a website from a dropdown list, it must update many other components. I'm currently doing it like so: ``` setSelectedWebsite (websiteId) { // The core reason for this component this.props.setSelectedWebsite(websiteId); // Fetch all associated accounts this.props.fetchAllAccounts(websiteId) // Several other "side effect" calls here... } ``` In the interest of making one component serve one purpose, this feels like a bad practice. **What is the best practice for triggering multiple actions in one call from a component?**<issue_comment>username_1: For handling complex side effects in a redux application, I would recommend looking at using Redux Sagas. I have seen its usage grow in popularity on projects large and small, and for good reason. With sagas, in the example you have provided, you can emit a single action from a function provided through `mapDispatchToProps` and let a saga take care of the rest. For example: (following example assumes flux standard actions) ``` //import redux connect, react, etc class SiteSelector extends React.Component { render() { const id = this.props.id; return <button onClick={() => this.props.action(id)}>Click Me</button> } } const mapStateToProps = (state) => ({ id: state.websiteId }) const mapDispatchToProps = dispatch => ({ action: (id) => dispatch(setSelectedWebsite(id)) }) export default connect(mapStateToProps, mapDispatchToProps)(SiteSelector) ``` Now you can handle the action emitted from `setSelectedWebsite` in a saga like so: ``` //import all saga dependencies, etc export function* selectSite(action) { const id = action.payload.id yield put(Actions.selectWebsite(id)) const results = yield call(Api.fetchAllAccounts, id) yield //do something complex with results yield //do something else... yield //and keep going...
} // Our watcher Saga: spawn a new selectSite task every time the action from setSelectedWebsite is dispatched export function* watchForSiteSelection() { yield takeEvery('SITE_SELECTED_ACTION', selectSite) } ``` For reference check out the docs: [Redux Sagas](https://redux-saga.js.org/) Upvotes: 0 <issue_comment>username_2: You could use [redux-thunk](https://github.com/gaearon/redux-thunk). Your component's method: ``` setSelectedWebsite(websiteId){ this.props.handleSetSelectedWebsite(websiteId) // this is a thunk } ``` Your Redux file with action creators / thunks: ``` // this function is a thunk export const handleSetSelectedWebsite = (websiteId) => (dispatch) => { dispatch(setSelectedWebsite(websiteId)) dispatch(fetchAllAccounts(websiteId)) } // these functions are action creators (or could be other thunks if you style them the same way as the thunk above) const setSelectedWebsite = (websiteId) => { // your code } const fetchAllAccounts = (websiteId) => { // your code } ``` Upvotes: 3 [selected_answer]
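To make the thunk mechanics concrete without installing redux, here is a minimal, dependency-free sketch of what the middleware does (all names are illustrative, not from the answers above): a dispatched function is invoked with `dispatch` instead of being sent to the reducer, so one call can fan out into several state changes.

```javascript
// Minimal store with built-in "thunk" support: dispatching a function
// runs it with dispatch, so it can dispatch several plain actions.
function createStore(reducer) {
  let state = reducer(undefined, { type: "@@init" });
  function dispatch(action) {
    if (typeof action === "function") {
      return action(dispatch); // the thunk decides what to dispatch
    }
    state = reducer(state, action);
    return action;
  }
  return { getState: () => state, dispatch };
}

function reducer(state = { selected: null, accounts: [] }, action) {
  switch (action.type) {
    case "SET_SELECTED_WEBSITE":
      return { ...state, selected: action.websiteId };
    case "SET_ACCOUNTS":
      return { ...state, accounts: action.accounts };
    default:
      return state;
  }
}

// One entry point that triggers several state changes, as in the answer.
const handleSetSelectedWebsite = (websiteId) => (dispatch) => {
  dispatch({ type: "SET_SELECTED_WEBSITE", websiteId });
  dispatch({ type: "SET_ACCOUNTS", accounts: ["accounts-for-" + websiteId] });
};

const store = createStore(reducer);
store.dispatch(handleSetSelectedWebsite(42));
console.log(store.getState().selected); // 42
```

The real redux-thunk middleware also passes `getState` and an extra argument to the thunk, but the dispatch-a-function mechanism is the same.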
2018/03/17
1,463
4,804
<issue_start>username_0: I have to solve an exercise in C. The exercise asks me to get 2 arrays (a, b) from input and see if there is a value of a[] which is less than all the values of b. Both arrays have 3 elements. The code I wrote is the following: ``` for (int i = 0; i < 3; ++i) { while(k<3) { if(a[i] ``` The problem with the code is that it prints false in any input I give. Any help would be appreciated. P.S. I left out the scanning from keyboard of the arrays and the declaration of i and k to keep the code short and clear.<issue_comment>username_1: To check if there is a value in a that is less than all the values of b, you can use a double for loop: ``` for (int i = 0; i < 3; ++i) { int count = 0; for (int j = 0; j < 3; ++j) { if (a[i] < b[j]) ++count; } if (count == 3) printf("%d is less than all values of b\n", a[i]); } ``` --- Another approach, which is faster, is to first find the smallest of `a` and then check for each `b` if this is smaller: ``` int smallest = a[0]; for (int i = 1; i < 3; ++i) if (a[i] < smallest) smallest = a[i]; int count = 0; for (int i = 0; i < 3; ++i) if (smallest < b[i]) ++count; if (count == 3) printf("%d is less than all values of b\n", smallest); ``` The first version has a nested loop, and so has n*n iterations. The second has only n+n iterations (O(n²) versus O(n)). Upvotes: 0 <issue_comment>username_2: For starters do not use magic numbers like 3. Instead use named constants. In this statement ``` if(a[i] ``` you are comparing elements of the same array `a` instead of comparing an element of `a` with an element of the array `b`. Also before the while loop you have to set the variables `count` and `k` to 0. ``` for (int i = 0; i < 3; ++i) { k = 0; count = 0; while(k<3) ``` And the break statement breaks the while loop but it does not break the outer for loop. The code does not determine the position of the target element of the array `a` that is less than all elements of the array `b`. You could write a separate function that does the task. Here is a demonstrative program.
``` #include <stdio.h> size_t find_less_than( const int a[], const int b[], size_t n ) { size_t i = 0; for ( _Bool found = 0; !found && i < n; i += !found ) { size_t j = 0; while ( j < n && a[i] < b[j] ) j++; found = j == n; } return i; } int main(void) { enum { N = 3 }; int a[N], b[N]; printf( "Enter %d values of the array a: ", N ); for ( size_t i = 0; i < N; i++ ) scanf( "%d", &a[i] ); printf( "Enter %d values of the array b: ", N ); for ( size_t i = 0; i < N; i++ ) scanf( "%d", &b[i] ); size_t i = find_less_than( a, b, N ); if ( i != N ) { printf( "The element at position %zu with the value %d of the array a\n" "is less than all elements of the array b\n", i, a[i] ); } else { puts( "There is no element in the array a\n" "that is less than all elements of the array b\n" ); } return 0; } ``` Its output might look like ``` Enter 3 values of the array a: 3 0 1 Enter 3 values of the array b: 1 2 3 The element at position 1 with the value 0 of the array a is less than all elements of the array b ``` Upvotes: 2 <issue_comment>username_3: Writing programs is mechanics. Of course, it seems challenging at first, but it rapidly becomes easier; you only need to translate the description of a solution ("first do this, then do that") into a programming language. The real art is coming up with good solutions. A good solution does not waste time or space, for example. And it also scales to larger problems. In this problem, the obvious solution is to take each element in turn from `a` and see if it is less than every element of `b`. In the worst case, that will involve comparing every element of `a` with every element of `b`. That doesn't matter much if they both only have three elements, but suppose they had a million elements. Then that procedure could end up comparing every element of `a` with every element of `b`, a total of 1,000,000,000,000 comparisons. Even on modern hardware, that would take a long time. But there is an easy improvement.
If an element from `a` is not less than any element of `b`, then it is not less than the smallest element of `b`. Conversely, if it is less than every element of `b`, then it is obviously less than the smallest element of `b`. So it is not necessary to compare with every element of `b`; only with the smallest element of `b`. We don't initially know what that element is, but it is straight-forward to scan `b` once to find it, and then use it in a scan of `a`. That lets us solve the million element problem with at most 1,999,999 comparisons, which I think you will agree is a lot more practical. Sometimes nested loops are necessary. But when you see one, you should always at least ask yourself, "is there a better solution?" Because the art and joy of programming isn't the mechanical translation of algorithm into code; it is the eureka moment in which that better solution reveals itself to you. Upvotes: 1
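The improvement described above can be written as a small C helper (an illustrative function, not code from the thread): scan `b` once for its minimum, then scan `a` once, for 2n comparisons instead of n².

```c
/* Returns 1 if some element of a is less than every element of b,
 * 0 otherwise. Two linear scans: O(n + n) instead of O(n * n). */
int exists_less_than(const int *a, const int *b, int n)
{
    int min_b = b[0];
    for (int i = 1; i < n; ++i)
        if (b[i] < min_b)
            min_b = b[i];      /* smallest element of b */

    for (int i = 0; i < n; ++i)
        if (a[i] < min_b)      /* less than min(b) => less than all of b */
            return 1;
    return 0;
}
```

With the sample input from the demonstrative program (a = {3, 0, 1}, b = {1, 2, 3}), the element 0 is below min(b) = 1, so the function returns 1.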
2018/03/17
1,512
5,787
<issue_start>username_0: How many CPU cycles are "lost" by adding an extra function call in Java? Are there compiler options available that transform many small functions into one big function in order to optimize performance? E.g. ``` void foo() { bar1() bar2() } void bar1() { a(); b(); } void bar2() { c(); d(); } ``` Would become: ``` void foo() { a(); b(); c(); d(); } ```<issue_comment>username_1: > > How many cpu cycles are "lost" by adding an extra function call in Java? > > > This depends on whether it is inlined or not. If it's inlined, it will be nothing (or a notional amount). If it is not compiled at runtime, it hardly matters because the cost of interpreting is more important than a micro optimisation, and it is likely not to be called enough to matter (which is why it wasn't optimised) The only time it really matters is when the code is called often, however for some reason it is prevented from being optimised. I would only assume this is the case because you have a profiler telling you this is a performance issue, and in this case manual inlining might be the answer. I design, develop and optimise latency sensitive code in Java and I choose to manually inline methods much less than 1% of the time, but only after a profiler e.g. Flight Recorder suggests there is a significant performance problem. > > In the rare event it matters, how much difference does it make? > > > I would estimate between 0.03 and 0.1 microseconds in real applications for each extra call, in a micro-benchmark it would be far less. > > Are there compiler options available that transform many small functions into one big function in order to optimize performance? > > > Yes, in fact what could happen is not only are all these methods inlined, but the methods which call them are inlined as well and none of them matter at runtime, but only if the code is called enough to be optimised. i.e. not only is `a`,`b`, `c` and `d` inlined and their code but `foo` is inlined as well.
By *default* the Oracle JVM can inline to a depth of 9 levels (until the code gets larger than 325 bytes of byte code) > > Will clean code help performance > > > The JVM runtime optimiser has common patterns it optimises for. Clean, simple code is generally easier to optimise and when you try something tricky or not obvious, you can end up being much slower. If it is harder to understand for a human, there is a good chance it's hard for the optimiser to understand/optimise. Upvotes: 2 <issue_comment>username_2: Runtime behavior and cleanliness of code (a compile time or life time property of code) belong to different requirement categories. There might be cases where optimizing for one category is detrimental to the other. The question is: which category really needs your attention? In my view cleanliness of code (or malleability of software) suffers from a huge lack of attention. You should focus on that first. And only if other requirements start to fall behind (e.g. performance) should you inquire as to whether that's due to how clean the code is. That means you need to really compare, you need to measure the difference it makes. With regard to performance use a profiler of your choice: run the "dirty" code variant and the clean variant and check the difference. Is the difference marked? Only if the "dirty" variant is significantly faster should you lower the cleanliness. Upvotes: 2 <issue_comment>username_3: Consider the following piece of code, which compares code that does 3 things in one for loop to another version that has 3 different for loops, one for each task.
``` @Test public void singleLoopVsMultiple() { for (int j = 0; j < 5; j++) { //single loop int x = 0, y = 0, z = 0; long l = System.currentTimeMillis(); for (int i = 0; i < 100000000; i++) { x++; y++; z++; } l = System.currentTimeMillis() - l; //multiple loops doing the same thing int a = 0, b = 0, c = 0; long m = System.currentTimeMillis(); for (int i = 0; i < 100000000; i++) { a++; } for (int i = 0; i < 100000000; i++) { b++; } for (int i = 0; i < 100000000; i++) { c++; } m = System.currentTimeMillis() - m; System.out.println(String.format("%d,%d", l, m)); } } ``` When I run it, here is the output I get for time in milliseconds. ``` 6,5 8,0 0,0 0,0 0,0 ``` After a few runs, the JVM is able to identify hotspots of intensive code and optimises parts of the code to make them significantly faster. In our previous example, after 2 runs, the JVM had already optimised the code so much that the discussion around for-loops became redundant. Unless we know what's happening inside, we cannot predict the performance implications of changes like the introduction of for-loops. The only way to actually improve the performance of a system is by measuring it and focusing only on fixing the actual bottlenecks. There is a chance that cleaning your code may make it faster for the JVM. But even if that is not the case, every performance optimisation comes with added code complexity. Ask yourself whether the added complexity is worth the future maintenance effort. After all, the most expensive resource on any team is the developer, not the servers, and any additional complexity slows the developer, adding to the project cost. The way to deal with it is to figure out your benchmarks, what kind of application you're making, what are the bottlenecks. If you're making a web-app, perhaps the DB is taking most of the time, and reducing the number of functions will not make a difference. On the other hand, if it's an app running on a system where performance is everything, every small thing counts.
Upvotes: 0
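A practical follow-up to the inlining discussion above: HotSpot can show you its actual inlining decisions, and the limits mentioned (depth 9, 325 bytes) correspond to tunable diagnostic flags. A sketch of the relevant command line (`MyApp` is a placeholder class name):

```
# print JIT inlining decisions to stdout (HotSpot)
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining MyApp

# the limits discussed above are controlled by (defaults shown):
#   -XX:MaxInlineLevel=9      maximum depth of nested inlining
#   -XX:FreqInlineSize=325    max bytecode size for inlining a hot method
```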
2018/03/17
800
2,874
<issue_start>username_0: Rails server will not start after install of paperclip. I have this error message in the console: ``` undefined method 'has_attached_file'. ``` In my Gemfile ``` gem "paperclip", :git => "http://github.com/thoughtbot/paperclip.git" ``` I tried this in my config/environments/development.rb ``` config.gem "paperclip" ``` and this in my config/application.rb ``` Paperclip::Railtie.insert ``` I have ImageMagick installed on my computer (I'm on windows 10). Any clue? I had a message in the console after install of paperclip, saying that paperclip is now compatible with aws-sdk >= 2.0.0. So I also set the latest aws-sdk-ruby from github... EDIT: also tried this after getting the path with the command 'which convert' ``` Paperclip.options[:command_path] = "/c/Program Files/ImageMagick-7.0.7-Q16/convert" ```<issue_comment>username_1: First thing I'd change is to use a version number instead of getting the package directly from GitHub. In `Gemfile`: **Change** ``` gem "paperclip", :git => "http://github.com/thoughtbot/paperclip.git" ``` **To** ``` gem "paperclip", "~> 6.0" ``` Make sure to run the `rails generate paperclip` command (example: `rails generate paperclip photo image`) to add the needed `attachment` field to your database model schema. Once you have the `attachment` field you can use `has_attached_file` to mount paperclip to that field (example: `has_attached_file :image`). And don't forget to restart your server. P.S: There's no need to use `config.gem "paperclip"` and `Paperclip::Railtie.insert`, I wasn't able to find any mention of them in the GitHub Paperclip documentation so I'm sure they are discontinued now as they were used in pre Rails 4 applications. *P.P.S: I would strongly recommend you dual boot to a Unix operating system (be it a Linux distribution or macOS) or use an online IDE such as Cloud9 instead of using Windows.
It's simply a bad idea and while working on your project you want to replicate to the smallest detail your production environment.* Upvotes: 1 <issue_comment>username_2: Try to add ``` include Paperclip::Glue ``` to your Model. Upvotes: 0 <issue_comment>username_3: I'm all set now. Apparently there was a problem with the database construction. For the rest, everything is well explained on github <https://github.com/thoughtbot/paperclip> but 1) I was following a tutorial with a few steps missing and 2) there was a specific problem for windows. Dropping all the tables and migrating again from scratch seems to have solved the very basic problem which caused the error. Windows users, pay attention to your path setting for the file.exe, which has nothing to do with ImageMagick. You can place this file.exe anywhere you want, in my case ``` Paperclip.options[:command_path] = 'C:\Sites\utils\GnuWin32\bin' ``` Thanks anyway! Upvotes: 0
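For reference, a sketch of the pieces Paperclip expects to line up once the gem loads correctly (the model and attachment names follow the example in the first answer; note that Paperclip 4+ requires a content-type validation on every attachment):

```
# Gemfile
gem "paperclip", "~> 6.0"

# app/models/photo.rb
class Photo < ApplicationRecord
  has_attached_file :image
  validates_attachment_content_type :image, content_type: /\Aimage\/.*\z/
end
```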
2018/03/17
1,024
3,194
<issue_start>username_0: I've already read [Cannot install distribute: pypi.python.org rejecting http](https://stackoverflow.com/questions/46962577/cannot-install-distribute-pypi-python-org-rejecting-http) and [Getting error 403 while installing package with pip](https://stackoverflow.com/questions/46967488/getting-error-403-while-installing-package-with-pip) but the workarounds are not working. While trying to do `pip install --upgrade pyls --index-url=https://pypi.python.org/simple` I got: ``` Collecting pyls Using cached pyls-0.1.3.zip Complete output from command python setup.py egg_info: Downloading http://pypi.python.org/packages/source/d/distribute/distribute-0.6.14.tar.gz Traceback (most recent call last): File "", line 1, in File "/tmp/pip-build-PHBndM/pyls/setup.py", line 31, in main() File "/tmp/pip-build-PHBndM/pyls/setup.py", line 11, in main distribute_setup.use_setuptools() File "setup_utils/distribute_setup.py", line 145, in use_setuptools return _do_download(version, download_base, to_dir, download_delay) File "setup_utils/distribute_setup.py", line 124, in _do_download to_dir, download_delay) File "setup_utils/distribute_setup.py", line 193, in download_setuptools src = urlopen(url) File "/usr/lib/python2.7/urllib2.py", line 154, in urlopen return opener.open(url, data, timeout) File "/usr/lib/python2.7/urllib2.py", line 435, in open response = meth(req, response) File "/usr/lib/python2.7/urllib2.py", line 548, in http_response 'http', request, response, code, msg, hdrs) File "/usr/lib/python2.7/urllib2.py", line 473, in error return self._call_chain(*args) File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain result = func(*args) File "/usr/lib/python2.7/urllib2.py", line 556, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) urllib2.HTTPError: HTTP Error 403: SSL is required ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in
/tmp/pip-build-hdpX4y/pyls/ ``` My Python version is 2.7, and my pip version is `pip 9.0.2 from /usr/local/lib/python2.7/dist-packages (python 2.7)`. To reproduce: spin up this docker instance `https://hub.docker.com/r/allansimon/allan-docker-dev-python` and launch the command in it (with user `vagrant`)<issue_comment>username_1: I tried this and it worked for me: ``` pip install --upgrade pyls -i https://pypi.python.org/simple ``` Upvotes: -1 <issue_comment>username_2: I was also attempting to upgrade `pyls` instead of `python-language-server` and I hit the same error. Doing this instead worked for me: ``` pip install --upgrade python-language-server ``` Source: <https://github.com/palantir/python-language-server/issues/395> Upvotes: 0 <issue_comment>username_3: I encountered a similar issue when jython was executing ez_setup.py. It turned out I just had to replace http with https in one of the URLs in the script. Upvotes: 1 <issue_comment>username_4: In case this might help anyone else...I ran into this issue recently and resolved it by simply updating the pip installer to the latest version: ``` pip install -U pip ``` Upvotes: 1
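To make the HTTPS index permanent instead of passing `--index-url` on every command, pip also reads a config file (the path shown is the per-user location on Linux; `https://pypi.org/simple/` is the current canonical index URL):

```
# ~/.config/pip/pip.conf
[global]
index-url = https://pypi.org/simple/
```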
2018/03/17
1,365
4,646
<issue_start>username_0: I've a raspberry pi on my network with an LED strip attached to it. My purpose is to create a jar file that will sit on the pi, monitor system events such as logins and load average, and drive the LED based on those inputs. To continuously monitor the logged in users, I am trying to use akka actors. Using the examples provided [here](https://doc.akka.io/docs/akka/current/scheduler.html), this is what I've gotten so far: ``` import com.pi4j.io.gpio.GpioFactory import com.pi4j.io.gpio.RaspiPin import sys.process._ import akka.actor.{Actor, Props, ActorSystem} import scala.concurrent.duration._ val who :String = "who".!! class Blinker extends Actor { private def gpio = GpioFactory.getInstance private def led = gpio.provisionDigitalOutputPin(RaspiPin.GPIO_08) def receive = { case x if who.contains("pi") => led.blink(250) case x if who.contains("moocow") => println("falalalala") } val blinker = system.actorOf(Props(classOf[Blinker], this)) val cancellable = system.scheduler.schedule( 0 milliseconds, 50 milliseconds, blinker, who) } ``` However, `system` is not recognised by my IDE (IntelliJ) and it says, `cannot resolve symbol` I also have a main object like this: ``` object ledStrip { def main(args: Array[String]): Unit = { val blink = new Blinker // blink.receive } } ``` In `main`, I'm not quite sure how to initialise the application. Needless to say, this is my first time writing a Scala program. Help? **Edit:** Here is the updated program after incorporating what Michal has said ``` class Blinker extends Actor { val who: String = "who".!!
private val gpio = GpioFactory.getInstance private val led = gpio.provisionDigitalOutputPin(RaspiPin.GPIO_08) def receive = { case x if who.contains("pi") => led.blink(250) case x if who.contains("moocow") => println("falalalala") } val system = ActorSystem() } object ledStrip extends Blinker { def main(args: Array[String]): Unit = { val blinker = system.actorOf(Props(classOf[Blinker], this)) import system.dispatcher val cancellable = system.scheduler.schedule( 50 milliseconds, 5000 milliseconds, blinker, who) } } ``` This program compiles fine, but throws the following error upon execution: > > Exception in thread "main" java.lang.ExceptionInInitializerError at > ledStrip.main(ledStrip.scala) Caused by: > akka.actor.ActorInitializationException: You cannot create an instance > of [ledStrip$] explicitly using the constructor (new). You have to use > one of the 'actorOf' factory methods to create a new actor. See the > documentation. at > akka.actor.ActorInitializationException$.apply(Actor.scala:194) at > akka.actor.Actor.$init$(Actor.scala:472) at > Blinker.(ledStrip.scala:15) at > ledStrip$.(ledStrip.scala:34) at > ledStrip$.(ledStrip.scala) ... 1 more > > > **Edit 2** Code that compiles and runs (behaviour is still not as desired)< `blink(1500)` is never executed when user: `pi` logs out from the shell> ``` object sysUser { val who: String = "who".!! 
} class Blinker extends Actor { private val gpio = GpioFactory.getInstance private val led = gpio.provisionDigitalOutputPin(RaspiPin.GPIO_08) def receive = { case x if x.toString.contains("pi") => led.blink(50) case x if x.toString.contains("moocow") => println("falalalala") case _ => led.blink(1500) } } object ledStrip { def main(args: Array[String]): Unit = { val system = ActorSystem() val blinker = system.actorOf(Props[Blinker], "blinker") import system.dispatcher val cancellable = system.scheduler.schedule( 50 milliseconds, 5000 milliseconds, blinker, sysUser.who) } } ```
2018/03/17
318
1,205
<issue_start>username_0: I am building an Android application. On the front page I am displaying a random background image. This means that the width and height are dynamic. The problem is that the background image is being stretched. I am displaying it in the LinearLayout with the code ``` android:background="@drawable/img_salad_thumb". ``` Here is a screenshot of the app: [![This is how my app looks](https://i.stack.imgur.com/GdcGm.png)](https://i.stack.imgur.com/GdcGm.png) As you can see the background image loading behind the logo is very stretched… ``` <?xml version="1.0" encoding="utf-8"?> ```<issue_comment>username_1: Hi, setting the attribute 'android:scaleType' in ImageView may help you. You can get more information about 'scaleType' here: [ImageView.ScaleType | Android Developers](https://developer.android.com/reference/android/widget/ImageView.ScaleType.html) Upvotes: -1 <issue_comment>username_2: Because you set the image as the background of your main LinearLayout, which has `match_parent` for both `layout_width` and `layout_height`, the image fills the whole screen in the background and gets stretched, so you only see part of it. Update your code like this ``` enter code her ``` Upvotes: 3 [selected_answer]
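Since the code block in the accepted answer above is empty, here is one common way to apply its advice (an illustrative sketch, not the answerer's actual code): move the picture out of `android:background` into an `ImageView` with `android:scaleType="centerCrop"`, which scales uniformly and crops the overflow instead of stretching.

```
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- background image: centerCrop keeps the aspect ratio -->
    <ImageView
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:src="@drawable/img_salad_thumb"
        android:scaleType="centerCrop" />

    <!-- original page content (logo, etc.) goes on top -->
    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:orientation="vertical" />
</FrameLayout>
```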
2018/03/17
1,094
3,613
<issue_start>username_0: I set the BOUNCE animation for a marker. When I click the Hide button, the marker is hidden. After I click the Show button to show the marker again, it no longer has the animation it had before. ``` var latlng = new google.maps.LatLng(21.0394475,105.7540192); var marker; function initialize(){ var mapProp = { center:latlng, zoom:14, mapTypeId:google.maps.MapTypeId.ROADMAP }; map = new google.maps.Map(document.getElementById("map"),mapProp); marker = new google.maps.Marker({ icon: "http://maps.google.com/mapfiles/ms/micons/red.png", map: map, draggable: true, position: latlng, animation : google.maps.Animation.BOUNCE }); } google.maps.event.addDomListener(window, 'load', initialize); $(function(){ $('#btn1').click(function(){ marker.setVisible(false); console.log('hide animating = ' + marker.animating); console.log('hide animation = ' + marker.animation); }); $('#btn2').click(function(){ marker.setVisible(true); console.log('show animating = ' + marker.animating); console.log('show animation = ' + marker.animation); }); }) Hide show ``` When debugging, the marker still has the animation attribute: [![Watch animation marker](https://i.stack.imgur.com/qFHYt.png)](https://i.stack.imgur.com/qFHYt.png) And: [![Show log animation marker](https://i.stack.imgur.com/pXoVd.png)](https://i.stack.imgur.com/pXoVd.png) I can set the animation to **google.maps.Animation.BOUNCE** again in the Show button's click handler, but I want to know why the marker loses its animation after the button is clicked?<issue_comment>username_1: For the button that re-activates the marker you could explicitly call the `setAnimation` method ``` $('#btn2').click(function(){ marker.setVisible( true ); marker.setAnimation( google.maps.Animation.BOUNCE ); console.log('show animating = ' + marker.animating); console.log('show animation = ' + marker.animation); }); ``` Upvotes: 0 <issue_comment>username_2: The `visible` property doesn't seem to change the animation, although toggling it does stop animation.
If you want to use the existing animation property, you can restart the animation with: ``` $('#btn2').click(function() { marker.setVisible(true); if (marker.getAnimation() != null) { marker.setAnimation(marker.getAnimation()); } }); ``` [proof of concept fiddle](https://jsfiddle.net/username_2/ov7hyqbm/1/) **code snippet:** ```js function initialize() { var mapProp = { center: latlng, zoom: 14, mapTypeId: google.maps.MapTypeId.ROADMAP }; map = new google.maps.Map(document.getElementById("map"), mapProp); marker = new google.maps.Marker({ icon: "http://maps.google.com/mapfiles/ms/micons/red.png", map: map, draggable: true, position: latlng, animation: google.maps.Animation.BOUNCE }); } $(function() { $('#btn1').click(function() { marker.setVisible(false); console.log('hide animating = ' + marker.animating); console.log('hide animation = ' + marker.animation); }); $('#btn2').click(function() { marker.setVisible(true); console.log(marker.getAnimation()); if (marker.getAnimation() != null) { marker.setAnimation(marker.getAnimation()); } console.log('show animating = ' + marker.animating); console.log('show animation = ' + marker.animation); }); }) google.maps.event.addDomListener(window, 'load', initialize); var latlng = new google.maps.LatLng(21.0394475, 105.7540192); var marker; ``` ```css html, body, #map { height: 100%; width: 100%; margin: 0px; padding: 0px } ``` ```html Hide show ``` Upvotes: 1
2018/03/17
985
3,096
<issue_start>username_0: So I had this C Windows program that writes to a data file with this code. ``` fprintf(fo,"\xbf%06d",num) ; ``` It worked fine, but on some Chinese computers it behaves differently. I made a little test program in C and compiled it with Borland C and MinGW. ``` #include <stdio.h>

void main(void)
{
    int i = 0 ;
    unsigned char b[100] ;

    sprintf(b,"\xbf%d",12345) ;
    printf("\n%s\n",b) ;
    while (b[i]) printf(" %02X",b[i++]) ;
    printf("\n") ;
} ``` On my computer the output is: ``` ┐12345 BF 31 32 33 34 35 ``` But when I let my Chinese client test it on his computer, it worked differently for the Borland version. The output is: ``` ?d BF 25 64 ``` Apparently the \xbf and the following % are combined into one Chinese character, so the %d is never seen as a conversion specifier. In China the output of the program compiled with MinGW is: ``` ?2345 BF 31 32 33 34 35 ``` Here the format is parsed one byte at a time. Which one is the correct behaviour? How do other C/C++ compilers handle this?
2018/03/17
1,229
4,520
<issue_start>username_0: I load an image from drawable, and it usually shows a black image if I enable ProGuard. If I make a debug build (ProGuard disabled), everything looks good. Here is my ProGuard config: ``` -keep public class * implements com.bumptech.glide.module.GlideModule -keep public class * extends com.bumptech.glide.module.AppGlideModule -keep public enum com.bumptech.glide.load.ImageHeaderParser$** { **[] $VALUES; public *; } ``` Here is my code (Glide 4.6.1): Load image: ``` Apploader.load("",holder.imageView,MyUtils.getDrawableRessourceID(item.getFlag().toLowerCase(Locale.ENGLISH).substring(0, 2)));

public static void load(Object uri, ImageView img, int drawable) {
    RequestOptions options = RequestOptions.centerInsideTransform().placeholder(new ColorDrawable(ContextCompat.getColor(context, R.color.gray))).error(drawable).diskCacheStrategy(DiskCacheStrategy.RESOURCE);
    Glide.with(context).load(uri)
            .apply(options)
            .thumbnail(0.5f)
            .into(img);
} ```<issue_comment>username_1: Don't forget to include the Glide dependency in your ProGuard rules. Add the lines below to your proguard-rules.pro file: ``` -keep public class * implements com.bumptech.glide.module.GlideModule -keep public class * extends com.bumptech.glide.module.AppGlideModule -keep public enum com.bumptech.glide.load.ImageHeaderParser$** { **[] $VALUES; public *; } # Uncomment for DexGuard only #-keepresourcexmlelements manifest/application/meta-data@value=GlideModule ``` It works without any doubt. Or you can try this: ``` -keep public class * implements com.bumptech.glide.module.GlideModule -keep public enum com.bumptech.glide.load.resource.bitmap.ImageHeaderParser$** { **[] $VALUES; public *; } ``` And in code: ``` GlideApp .with(context) .load(url) .centerCrop() .placeholder(R.drawable.loading_spinner) .into(myImageView); ``` Upvotes: 2 <issue_comment>username_2: Why don't you use GlideApp like this: ``` public static void load(Object uri, ImageView img, int drawable) { GlideApp.with(context) .load(uri)
.placeholder(drawable) .diskCacheStrategy(DiskCacheStrategy.RESOURCE) .into(img); } ``` and you just need to add a new class for the AppGlideModule, like this: ``` import com.bumptech.glide.annotation.GlideModule
import com.bumptech.glide.module.AppGlideModule

@GlideModule
class MyAppGlideModule : AppGlideModule() ``` Don't forget to rebuild the project after creating your AppGlideModule class. Upvotes: 0 <issue_comment>username_3: I found the reason for this issue: I used shrinkResources in the release build, so Android Studio couldn't detect some pictures that I use by name in my project. I need to remove shrinkResources or keep the drawable folder. Upvotes: 2 [selected_answer]<issue_comment>username_4: It is not because of your ProGuard file. When you set `shrinkResources true` in the release build type, then while generating the application in release mode it will shrink the unused resources (that is, resources that aren't mentioned in Java code). So keep `shrinkResources false` in the release type, as in: ``` release { shrinkResources false minifyEnabled true zipAlignEnabled true proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro' debuggable false } ``` Upvotes: 2 <issue_comment>username_5: So I've been trying to solve the issue without using `shrinkResources false`, because I want to use resource shrinking but keep specific files. In my case I was using `.json` to get image names and was showing local images. But the `resource shrinker` thought that those images were unused and deleted them, even though I use them by calling `resources.getIdentifier()`. The [doc](https://developer.android.com/studio/build/shrink-code#strict-reference-checks) says that > > When you use `resources.getIdentifier()`, the resource shrinker behaves defensively by default and marks all resources with a matching name format as potentially used and unavailable for removal. > > > But I was getting the image names from `.json` as an object, so there wasn't any `String` like `"img_%1d"` in my code.
That's why the `resource shrinker` thought that my images were unused. **To solve the issue**, you can create a `keep.xml` and define the resources that you want to keep, like below:

```
<?xml version="1.0" encoding="utf-8"?>
<resources xmlns:tools="http://schemas.android.com/tools"
    tools:keep="@drawable/img_*" />
```

Use a comma to list more files, and use the asterisk character as a wildcard. [Detailed info](https://developer.android.com/studio/build/shrink-code#shrink-resources) Upvotes: 2
2018/03/17
1,272
4,142
<issue_start>username_0: I have an input message from a customer, for example: ``` "This is example message with bad words like FWORD" ``` I have a pattern that allows me to check this message, but I want output like this: ``` "This is example message with bad words like F***D" ``` I just want to replace all letters with stars except the first and last character. How can I do that? Thanks in advance.<issue_comment>username_1: String has a method called .replaceAll(String regex, String replacement). You can simply call .replaceAll for the words that you want to censor. Example: ``` stringToEdit.replaceAll("FWORD","F***D"); ``` If you have multiple words you wish to censor, I would suggest keeping multiple regexes, one for each word, in a Map, where the key is the regex and the value is the censored version of the word, or the other way around. You can then go through each of the map entries and use replaceAll on the string.
If it does, use replace with a regex like: [`\\B\\w\\B`](https://regex101.com/r/WLdtkC/1) That would match * `\B` Assert position where `\b` not matches * `\w` Word character (Update to your requirements if you want to match more) * `\B` Assert position where `\b` not matches As an example: ``` String censored = "FWORD"; String str = "This is example message with bad words like FWORD "; String[] words = str.split(" "); for (int i = 0; i < words.length; i++) { if (words[i].equals(censored)) { words[i] = words[i].replaceAll("\\B\\w\\B", "*"); } } System.out.println(String.join(" ", words)); ``` That would give you > > This is example message with bad words like F\*\*\*D > > > [Demo Java](http://rextester.com/JSIUL62259) If you change `censored` to "message", this will result in: > > This is e\*\*\*\*\*e message with bad words like FWORD > > > If you want to list multiple bad words, you could for example create an array with bad words and loop through that array an check each word like [this Demo](http://rextester.com/QXV39945). Thay way your result could look like: > > This is e\*\*\*\*\*e message with bad words like F\*\*\*D > > > Upvotes: 1 <issue_comment>username_4: Try this (with apache.commons.lang3 lib) ``` import java.util.HashMap; import java.util.Map; import org.apache.commons.lang3.StringUtils; public class FilterApp { public static void main(String[] args) { // Define your filter String[] filterArray = new String[]{"FWORD"}; // Build a Map with replacements Map filterMap = new HashMap<>(); for( String f: filterArray){ String replacement = f.charAt(0)+StringUtils.leftPad("\*", (f.length()-2), '\*')+f.charAt(f.length()-1); // (?i) means 'CASE\_INSENSITIVE' and \\b means 'word boundry' filterMap.put("(?i)\\b"+f+"\\b", replacement); } // Let's do it.... String result = "This is example message with bad words like FWORD"; for (Map.Entry filter : filterMap.entrySet()) { result = result.replaceAll(filter.getKey(), filter.getValue()); } // .. 
and here is the result System.out.println(result); } } ``` Upvotes: 0
2018/03/17
1,677
5,234
<issue_start>username_0: I have the following 2 lists mapped to each other: ``` Values = [1,2,3,4,5,6,8,9,10,9,7,6,5.50,5,6,7,8,10,12,15,14,13.50,12]
Dates = [Day1,Day2,Day3,Day4,Day5,Day6,Day8,Day9,Day10,Day11,..,Day20] ``` [![enter image description here](https://i.stack.imgur.com/7MF2x.png)](https://i.stack.imgur.com/7MF2x.png) As you can see in the screenshot, I want to write a program that will loop over the list Values and return the values of ``` XA = [X,A]
AB = [A,B]
BC = [B,C]
CD = [C,D]
DE = [D,E] ``` The challenge is that I don't know whether the list will only have values from X to D or whether it will continue up to N. I have tried with the code below, but the result is not giving me what I want, because I need to figure out a way to store the values before continuing with the process: ``` for Prs in close:
    # getting the high
    if Prs > SwingHigh:
        SwingHigh = Prs
        Swinglow = Prs
    elif Prs < Swinglow:
        Swinglow = Prs
        PrevHigh = SwingHigh
    else:
        PrevLow = Swinglow
    impulSizee = SwingHigh - InicialPrice
    retrSize = SwingHigh - Swinglow
    # getting the index of the lows low
    print('-------------------------------------')
    print('the price testing', Prs)
    print('the starting price is InicialPrice ', InicialPrice)
    print('the swing low is PrevLow ', PrevLow)
    print('the swing high is PrevHigh', PrevHigh)
    print('the new high SwingHigh ', SwingHigh)
    print('the new low Swinglow------ ', Swinglow)
```
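One possible structure for the bookkeeping the question asks about, sketched as a suggestion rather than an answer from the thread (the helper name and the tie-handling with >= are assumptions): record a [start, end] pair every time the direction of the series flips.

```python
def swing_segments(values):
    """Split a series into alternating up/down legs.

    Returns [start, end] pairs, one per leg; each end is a local
    extreme (or the final value, for the still-open last leg).
    """
    if len(values) < 2:
        return []
    segments = []
    start = values[0]
    rising = values[1] >= values[0]   # ties count as "still rising"
    prev = values[1]
    for v in values[2:]:
        now_rising = v >= prev
        if now_rising != rising:      # direction flipped: leg ended at prev
            segments.append([start, prev])
            start = prev
            rising = now_rising
        prev = v
    segments.append([start, prev])    # close the final leg
    return segments


values = [1, 2, 3, 4, 5, 6, 8, 9, 10, 9, 7, 6,
          5.50, 5, 6, 7, 8, 10, 12, 15, 14, 13.50, 12]
print(swing_segments(values))  # [[1, 10], [10, 5], [5, 15], [15, 12]]
```

With the question's Values list this yields the XA/AB/BC/CD pairs for however many legs the data happens to contain, which addresses the "X to D or continue till N" concern.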