2018/03/17
<issue_start>username_0: `jq` is supposed to > > process/filter JSON inputs and producing the filter's results ***as JSON*** > > > However, I found that after the `jq` process/filter, the output is no longer in JSON format. E.g., <https://stedolan.github.io/jq/tutorial/#result5>, i.e., ``` $ curl -s 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq '.[] | {message: .commit.message, name: .commit.committer.name}' { "message": "Merge pull request #162 from stedolan/utf8-fixes\n\nUtf8 fixes. Closes #161", "name": "<NAME>" } { "message": "Reject all overlong UTF8 sequences.", "name": "<NAME>" } . . . ``` Is there any workaround? **UPDATE:** How can I wrap the whole result into a JSON structure of: ``` { "Commits": [ {...}, {...}, {...} ] } ``` I've tried: ``` jq '.[] | Commits: [{message: .commit.message, name: .commit.committer.name}]' jq 'Commits: [.[] | {message: .commit.message, name: .commit.committer.name}]' ``` but neither works.<issue_comment>username_1: Found it, on the same page, <https://stedolan.github.io/jq/tutorial/#result6> > > If you want to get the output as a single array, you can tell jq to “collect” all of the answers by wrapping the filter in square brackets: > > > ``` jq '[.[] | {message: .commit.message, name: .commit.committer.name}]' ``` Upvotes: 4 <issue_comment>username_2: Technically speaking, unless otherwise instructed (notably with the `-r` command-line option), jq produces a *stream* of JSON entities. One way to convert an input stream of JSON entities into a JSON array containing them is to use the `-s` command-line option. Response to UPDATE ------------------ To produce a JSON object of the form: ``` { "Commits": [ {...}, {...}, {...} ] } ``` you could write something like: ``` jq '{Commits: [.[] | {message: .commit.message, name: .commit.committer.name}]}' ``` (jq understands the '{Commits: _}' shorthand.) Upvotes: 4 [selected_answer]
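For readers without jq at hand, the collect-and-wrap step from the selected answer can be sketched in plain Python with the standard `json` module. This is only an illustration of the shape of the transformation; the commit data below is made up, not the real GitHub API response:

```python
import json

# Hypothetical stand-in for the stream of commit objects jq receives.
commits = [
    {"commit": {"message": "Reject all overlong UTF8 sequences.",
                "committer": {"name": "Alice"}}},
    {"commit": {"message": "Merge pull request #162",
                "committer": {"name": "Bob"}}},
]

# Same transformation as:
#   jq '{Commits: [.[] | {message: .commit.message, name: .commit.committer.name}]}'
result = {"Commits": [{"message": c["commit"]["message"],
                       "name": c["commit"]["committer"]["name"]}
                      for c in commits]}

print(json.dumps(result, indent=2))
```

The outer `{Commits: ...}` object construction is exactly what turns the stream of per-commit objects into a single JSON value.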
2018/03/17
<issue_start>username_0: What is the easiest and shortest method to determine the current location of the device in Android? An approximate, city-level location is good enough, and I need the Lat/Long coordinates, not the name. Think of weather location. But I'm using it for sunrise/sunset calculations. All other examples here on StackOverflow and the web are at least a page long and rely on callbacks. But I'm in the middle of a watch face draw function and can't wait for callbacks. I just want to get the last known location that is certainly already lying around somewhere in memory. One answer pointed me to `LocationManager.getLastKnownLocation()` but Android Studio says it doesn't know that method. So it's probably outdated and removed from more recent APIs. I'm new to Android development and don't have a clue where to search. Web snippets brought me here but this time no further.<issue_comment>username_1: > > But I'm in the middle of a watch face draw function and can't wait for callbacks. > > > Frequently, you don't have a choice. > > I just want to get the last known location that is certainly already lying around somewhere in memory. > > > Frequently, it is not lying around in memory. > > One answer pointed me to LocationManager.getLastKnownLocation() but Android Studio says it doesn't know that method. > > > That method exists, as is seen in [the documentation for that method](https://developer.android.com/reference/android/location/LocationManager.html#getLastKnownLocation(java.lang.String)). Since your question does not have the code that you are trying, it is difficult for anyone to help explain why that code is not compiling. Perhaps you are trying to call `getLastKnownLocation()` on the `LocationManager` class, rather than an instance of `LocationManager`. Bear in mind that `getLastKnownLocation()` frequently returns `null`, as the device usually is not actively tracking the user location. 
Also, the user may have disabled location access, or the device might be in airplane mode (and so has no access to non-GPS sources), or the device might be in a facility that leads to crappy signals (e.g., large parking structures), etc. You could also use the Play Services' [fused location provider](https://developer.android.com/training/location/retrieve-current.html), but it requires a callback. Upvotes: 2 <issue_comment>username_2: Add the location permissions to your manifest, then implement `LocationListener` in your activity; a method from this interface will give you updates on the device's location as it changes: ``` @Override public void onLocationChanged(Location location) { } ``` Call this `startGettingLocations()` method in your activity's `onCreate()` so that all needed permissions can be requested from the user: ``` private void startGettingLocations() { Log.i("Process", "start getting locations"); LocationManager lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE); boolean isGPS = lm.isProviderEnabled(LocationManager.GPS_PROVIDER); boolean isNetwork = lm.isProviderEnabled(LocationManager.NETWORK_PROVIDER); boolean canGetLocation = true; int ALL_PERMISSIONS_RESULT = 101; long MIN_DISTANCE_CHANGE_FOR_UPDATES = 100; // Distance in meters long MIN_TIME_BW_UPDATES = 1000 * 10 * 6; // Time in milliseconds ArrayList permissions = new ArrayList<>(); ArrayList permissionsToRequest; permissions.add(Manifest.permission.ACCESS_FINE_LOCATION); permissions.add(Manifest.permission.ACCESS_COARSE_LOCATION); permissionsToRequest = findUnAskedPermissions(permissions); // Check if GPS and Network are on; if not, ask the user to turn them on if (!isGPS && !isNetwork) { showSettingsAlert(); } else { // check permissions if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) { if (permissionsToRequest.size() > 0) { requestPermissions(permissionsToRequest.toArray(new String[permissionsToRequest.size()]), ALL_PERMISSIONS_RESULT); canGetLocation = false; } } }
if (ActivityCompat.checkSelfPermission(this, android.Manifest.permission.ACCESS_FINE_LOCATION) != PackageManager.PERMISSION_GRANTED) { Log.i("Permissions", "fine loc not granted"); } else { Log.i("Permissions", "fine loc granted"); } if (ActivityCompat.checkSelfPermission(this, android.Manifest.permission.ACCESS_COARSE_LOCATION) != PackageManager.PERMISSION_GRANTED) { Log.i("Permissions", "coarse loc not granted"); } else { Log.i("Permissions", "coarse loc granted"); } if (ActivityCompat.checkSelfPermission(this, android.Manifest.permission.ACCESS_FINE_LOCATION) != PackageManager.PERMISSION_GRANTED && ActivityCompat.checkSelfPermission(this, android.Manifest.permission.ACCESS_COARSE_LOCATION) != PackageManager.PERMISSION_GRANTED) { Toast.makeText(this, "Permission not Granted", Toast.LENGTH_SHORT).show(); return; } // Starts requesting location updates if (canGetLocation) { if (isGPS) { lm.requestLocationUpdates( LocationManager.GPS_PROVIDER, MIN_TIME_BW_UPDATES, MIN_DISTANCE_CHANGE_FOR_UPDATES, this); } else if (isNetwork) { // from Network Provider lm.requestLocationUpdates( LocationManager.NETWORK_PROVIDER, MIN_TIME_BW_UPDATES, MIN_DISTANCE_CHANGE_FOR_UPDATES, this); } } else { Toast.makeText(this, "Can't get location", Toast.LENGTH_SHORT).show(); } } ``` Upvotes: -1 <issue_comment>username_3: Being in the middle of a watch face draw function or not should not make a difference. Most of the snippets you will find on the web will indeed tell you to use callbacks. Not because it's "ok" in some cases, but because it **is** how the system works. On a high level: ----------------- You'd let the system know that you're interested in the location of the device. --- The system then starts computing information. --- When a position has been determined, and then every time the location changes significantly enough (as defined via accuracy), the system will let you know what the actual position of the device now is (through these callbacks).
My recommendation: ------------------- Start implementing this logic when the watch face renderer is created. Have the option to display a "location not known" state (as @username_1 mentioned, location can be unavailable for a bunch of different reasons). And eventually have a proper way of storing the last known location (SharedPref sounds like a good option). Upvotes: 1
2018/03/17
<issue_start>username_0: I need to use Win2D in my cppwinrt project. Using the Win2D sample file as a model I’ve tried to mimic its setup, putting in the package file for Win2D from that project and duplicating the custom build step, but I can’t get it to compile. I remember some talk about Win2D being added soon to the standard cppwinrt headers so that it could be accessed like others of these wonderful cppwinrt interfaces. Is this something that might happen soon? Or does anyone have some really specific instructions for using Win2D in the current Visual Studio 2017 environment? I’m in the Insider preview program but haven’t seen anything come up. Thanks.<issue_comment>username_1: Once you have [downloaded Win2D](https://www.nuget.org/packages/Win2D.uwp) and assuming you have the latest RS4 Windows SDK installed, you can simply use cppwinrt to generate the Win2D headers for consumption: ``` cppwinrt -in lib\uap10.0\Microsoft.Graphics.Canvas.winmd -ref sdk ``` This will generate a winrt folder that contains the headers that you need to include in order to use Win2D. You will then need to add the parent folder to your include paths and ensure that the Win2D DLL is copied to your package or build output folder. Upvotes: 4 [selected_answer]<issue_comment>username_2: As of the recent update, using NuGet to install Win2D into a C++/WinRT project will automatically generate the headers and install them in "GeneratedFiles\winrt\" Upvotes: 3
2018/03/17
<issue_start>username_0: I am trying to run a subquery with a condition that returns a running total. However, I am receiving the following error: > > Only one expression can be specified in the select list when the subquery is not introduced with EXISTS. > > > Is there any way this code can be salvaged? Please be aware this code is part of a larger script that executes perfectly. The reason I need to keep it in this format is because it is the "missing piece", for lack of a better word. ``` SELECT A.[WeekEnding], ( SELECT SUM(A.[Weekly Sales Units]), A.[Description], A.[WeekEnding] FROM [FACT_SALES_HISTORY] A INNER JOIN [DIM_DATE] B ON A.WeekEnding = B.[WeekEnding] WHERE B.[YA Latest 1 Week] = 1 GROUP BY A.[Description], A.[WeekEnding] ) AS 'YA Units' FROM [FACT_SALES_HISTORY] A LEFT JOIN [DIM_DATE] B ON A.WeekEnding = B.[WeekEnding] ``` The output data, from the code, would look like the following: ``` [Weekly Sales Units] A.[Description] A.[WeekEnding] 24 Item One 03-10-2010 55 Item Two 03-10-2010 79 Item One 03-10-2010 98 Item Five 03-10-2010 11 Item Five 03-10-2010 ```<issue_comment>username_1: It looks like your subquery on its own would provide the sample data and the outer query is trying to sum that up by WeekEnding. If that is the case, then the whole thing could be replaced with this: ``` SELECT A.[WeekEnding], SUM(A.[Weekly Sales Units]) [YA Units] FROM [FACT_SALES_HISTORY] A INNER JOIN [DIM_DATE] B ON A.WeekEnding = B.[WeekEnding] WHERE B.[YA Latest 1 Week] = 1 GROUP BY A.[WeekEnding] ``` Upvotes: -1 <issue_comment>username_2: You can't select three different items in your subquery and then use an AS assignment. You could split that into two separate queries and then union them.
``` SELECT SUM(A.[Weekly Sales Units]), A.[Description], A.[WeekEnding] FROM [FACT_SALES_HISTORY] A INNER JOIN [DIM_DATE] B ON A.WeekEnding = B.[WeekEnding] WHERE B.[YA Latest 1 Week] = 1 GROUP BY A.[Description], A.[WeekEnding] UNION ALL -- UNION ALL keeps all rows; plain UNION would remove duplicates SELECT A.[WeekEnding]... FROM [FACT_SALES_HISTORY] A ``` Upvotes: 1
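username_1's simplification — a single grouped aggregate instead of a correlated subquery — can be sanity-checked in miniature with Python's built-in `sqlite3`. The table and column names below are made-up, lower-cased stand-ins modeled on the question, since the original schema isn't available:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Hypothetical minimal versions of the two tables from the question.
cur.execute("CREATE TABLE fact_sales_history (week_ending TEXT, description TEXT, weekly_sales_units INT)")
cur.execute("CREATE TABLE dim_date (week_ending TEXT, ya_latest_1_week INT)")
cur.executemany("INSERT INTO fact_sales_history VALUES (?, ?, ?)", [
    ("2010-03-10", "Item One", 24),
    ("2010-03-10", "Item Two", 55),
    ("2010-03-10", "Item One", 79),
])
cur.execute("INSERT INTO dim_date VALUES ('2010-03-10', 1)")

# One grouped SUM over the join, no subquery needed.
rows = cur.execute("""
    SELECT a.week_ending, SUM(a.weekly_sales_units) AS ya_units
    FROM fact_sales_history a
    JOIN dim_date b ON a.week_ending = b.week_ending
    WHERE b.ya_latest_1_week = 1
    GROUP BY a.week_ending
""").fetchall()
print(rows)  # [('2010-03-10', 158)]
```

The GROUP BY collapses all rows for a week into one summed row, which is what the outer query in the question was trying to achieve.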
2018/03/17
<issue_start>username_0: I have a list of e-mail IDs among which I have to select only those which do not have **ruba.com** as the domain name, using regex. For example, if I have <EMAIL>, <EMAIL> and <EMAIL>, then my regular expression should select the first two IDs. What should the regular expression for this problem be? I have tried two expressions: **[a-zA-Z0-9_.+-]+@[^(ruba)]+.[a-zA-Z0-9-.]+** and **[a-zA-Z0-9_.+-]+@[^r][^u][^b][^a]+.[a-zA-Z0-9-.]+** Neither was able to fulfill my requirement.
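The usual fix for this kind of requirement is a negative lookahead rather than a character class: `[^(ruba)]` excludes individual characters, not the word "ruba", which is why both attempts in the question fail. A Python sketch (the sample addresses are invented, since the originals were redacted):

```python
import re

# (?!ruba\.com$) right after '@' rejects addresses whose domain
# is exactly ruba.com; everything else passes through.
pattern = re.compile(r'^[A-Za-z0-9_.+-]+@(?!ruba\.com$)[A-Za-z0-9.-]+\.[A-Za-z]+$')

emails = ['alice@example.com', 'bob@foo.org', 'carol@ruba.com']
selected = [e for e in emails if pattern.match(e)]
print(selected)  # carol@ruba.com is filtered out
```

The `$` inside the lookahead anchors the rejection to the whole domain, so a domain like `ruba.com.au` would still be accepted.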
2018/03/17
<issue_start>username_0: Using pandas IndexSlice, is it possible to use a list of integers? I get KeyError: 'the label [xxxx] is not in the [columns]' when I use a list of integers (even when the values in the multiIndex level are formatted as strings): ``` vals = np.random.randn(4) df = pd.DataFrame({'l1': ['A', 'B', 'C', 'B'], 'l2': ['9876', '6789', '5432', '1234'], 'l3': ['Y', 'X', 'Y', 'Y'], 'value': vals}) df.set_index(['l1', 'l2', 'l3'], inplace=True) idx = pd.IndexSlice # None of the following works df.loc[idx[:, 6789, :]] df.loc[idx[:, [6789, 1234], :]] df.reset_index(inplace=True) df.l2 = df.l2.astype('str') df.set_index(['l1', 'l2', 'l3'], inplace=True) df.loc[idx[:, '6789', :]] ```<issue_comment>username_1: For starters, your index columns are made up of strings, so integer labels have no chance of working. To slice out a single value, use the idiomatic `xs`: ``` df.xs('6789', level='l2') value l1 l3 B X -1.955361 ``` For a list of values, specify an `axis` parameter to `loc`; ``` df.loc(axis=0)[idx[:, ['6789', '1234'], :]] value l1 l2 l3 B 6789 X -1.955361 1234 Y 0.703208 ``` Note this also works identically to `xs` for a scalar; ``` df.loc(axis=0)[idx[:, '6789', :]] value l1 l2 l3 B 6789 X -1.955361 ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Alternative option: ``` In [76]: df.loc[pd.IndexSlice[:, '6789', :], :] Out[76]: value l1 l2 l3 B 6789 X 1.306962 ``` PS: pay attention to the string value `'6789'` and to the last `:` in: ``` df.loc[pd.IndexSlice[...], :] # NOTE: ----> ^ ``` Upvotes: 1
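The approaches from the answers can be combined into one self-contained sketch. The values are fixed rather than random so the result is reproducible, and a `sort_index()` is added because slicing an unsorted MultiIndex can raise `UnsortedIndexError` on newer pandas versions:

```python
import pandas as pd

df = pd.DataFrame({'l1': ['A', 'B', 'C', 'B'],
                   'l2': ['9876', '6789', '5432', '1234'],
                   'l3': ['Y', 'X', 'Y', 'Y'],
                   'value': [1.0, 2.0, 3.0, 4.0]})
# Keep the level-2 labels as strings and lexsort the index before slicing.
df = df.set_index(['l1', 'l2', 'l3']).sort_index()

idx = pd.IndexSlice

# Scalar lookup: xs drops the sliced level from the result.
single = df.xs('6789', level='l2')

# List of labels: note the string keys and the trailing ", :" for columns.
subset = df.loc[idx[:, ['6789', '1234'], :], :]
```

Both lookups only succeed because the keys are strings, matching the dtype of the `l2` index level — integers would raise the KeyError from the question.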
2018/03/17
<issue_start>username_0: I am having trouble converting/comparing dates extracted from a calendar to the current date. I get a red line in Eclipse under my `if` statement. My goal is to evaluate the current date against the dates in the rows/cells and select/click the button beside the row/cell. However, to get to the second part I need to correctly evaluate the first part, which is the dates. My code is below: ``` for (WebElement pd: payDates) { LocalDate currentDate = LocalDate.now(); java.util.Date d = new SimpleDateFormat("yyyy-MM-dd").parse(currentDate.toString()); if (pd >= (d)) { driver.findElement(By.xpath("//tr[starts-with(@id,'changeStartWeekGrid_row_')and not(starts-with(@id,'changeStartWeekGrid_row_column'))]/td[5]/span'" + reqIndex + "])/TBODY[@id='changeStartWeekGrid_rows_tbody']/TR[7]/TD[1]/DIV[1]/DIV[1]/DIV[1]")).click(); PS_OBJ_CycleData.donebtn(driver).click(); break; } else { reqIndex++; PS_OBJ_CycleData.Nextbtn(driver).click(); } } } while (reqIndex < 7); /// do this 7 times; ------------ this works for me, thanks for all your help ----------------- int reqIndex = 0; dowhileloop: do { //List payDates = driver.findElements(By.xpath("//table[@id='changeStartWeekGrid_rows_table']//tr[position()>1]/td[position()=5]")); List < WebElement > payDates = driver.findElements(By.xpath("//tr[starts-with(@id,'changeStartWeekGrid_row_')and not(starts-with(@id,'changeStartWeekGrid_row_column'))]/td[5]/span")); // 5th column date print test List < String > texts = payDates.stream().map(WebElement::getText).collect(Collectors.toList()); System.out.println("date ->" + texts); //** Begin third inner for-loop **** for (WebElement pd: payDates) { // begin inner for loop SimpleDateFormat dateFormat = new SimpleDateFormat("dd/MM/yyyy"); Date payDate = dateFormat.parse(pd.getText()); System.out.println("sample-> " + dateFormat.format(payDate)); if (payDate.after(new Date())) { System.out.println("inside for loop");
driver.findElement(By.xpath("//tr[starts-with(@id,'changeStartWeekGrid_row_')and not(starts-with(@id,'changeStartWeekGrid_row_column'))]/td[5]/span'" + reqIndex + "])/TBODY[@id='changeStartWeekGrid_rows_tbody']/TR[7]/TD[1]/DIV[1]/DIV[1]/DIV[1]")).click(); PS_OBJ_CycleData.donebtn(driver).click(); break dowhileloop; } } //** END third inner for-loop **** reqIndex++; PS_OBJ_CycleData.Nextbtn(driver).click(); Thread.sleep(5000); ```<issue_comment>username_1: Convert `pd` to a `Date` (or better, its epoch as a `long`, since `Date` is deprecated) and use `d.compareTo` instead of `>=` Upvotes: 2 [selected_answer]<issue_comment>username_2: tl;dr ===== * **Avoid legacy** date-time classes (`Date`, `Calendar`, `SimpleDateFormat`). Use only *java.time* classes. * **Convert/parse** your mysterious input data as a `LocalDate`. * **Compare** by calling `isBefore`, `isAfter`, and `isEqual` methods. Avoid legacy date-time classes ============================== Firstly, do not mix legacy date-time classes with the modern *java.time* classes. Avoid ever using `java.util.Date`, `Calendar`, `SimpleDateFormat`, and any other date-time class not found in the `java.time` package. Those old classes are a wretched mess, entirely supplanted by the *java.time* framework. Good riddance. ISO 8601 ======== Your input format of YYYY-MM-DD is defined by the standard [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601). The *java.time* classes use these standard formats by default when parsing/generating strings. So no need to specify a formatting pattern. ``` LocalDate.parse( "2018-01-23" ) ``` When exchanging date-time values as text, always use [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) formats. Avoid other formats such as `MM/dd/yyyy`. Current date ============ > > LocalDate.now() > > > When getting the current date, always pass the desired/expected time zone. Otherwise you are implicitly relying on the JVM’s current default time zone.
That default can be changed at any moment by any code of any app within the JVM. A time zone is crucial in determining a date. For any given moment, the date varies around the globe by zone. For example, a few minutes after midnight in [Paris France](https://en.wikipedia.org/wiki/Europe/Paris) is a new day while still “yesterday” in [Montréal Québec](https://en.wikipedia.org/wiki/America/Montreal). If no time zone is specified, the JVM implicitly applies its current default time zone. That default may change at any moment, so your results may vary. Better to specify your desired/expected time zone explicitly as an argument. Specify a [proper time zone name](https://en.wikipedia.org/wiki/List_of_tz_zones_by_name) in the format of `continent/region`, such as [`America/Montreal`](https://en.wikipedia.org/wiki/America/Montreal), [`Africa/Casablanca`](https://en.wikipedia.org/wiki/Africa/Casablanca), or `Pacific/Auckland`. Never use the 3-4 letter abbreviation such as `EST` or `IST` as they are *not* true time zones, not standardized, and not even unique(!). ``` ZoneId z = ZoneId.of( "America/Montreal" ) ; LocalDate today = LocalDate.now( z ) ; ``` If you want to use the JVM’s current default time zone, ask for it and pass as an argument. If omitted, the JVM’s current default is applied implicitly. Better to be explicit, as the default may be changed at any moment *during runtime* by any code in any thread of any app within the JVM. ``` ZoneId z = ZoneId.systemDefault() ; // Get JVM’s current default time zone. ``` Convert your input to java.time =============================== Your Question cannot be answered fully until you explain *exactly* what is in your input, the `pd` object. Is that `pd` a string? If so, what kind of text is in there? Search Stack Overflow for `DateTimeFormatter` class to find many examples and discussions of parsing text to date-time objects using java.time. If your input is a string such as `01/23/2017`, define a formatting pattern to match. 
``` String input = "01/23/2017" ; DateTimeFormatter f = DateTimeFormatter.ofPattern( "MM/dd/uuuu" ) ; // Define a custom format to match input. LocalDate ld = LocalDate.parse( input , f ) ; ``` As mentioned above, always use standard [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) formats rather than custom formats such as this when exchanging date-time values as text. Comparing ========= To compare [`LocalDate`](https://docs.oracle.com/javase/9/docs/api/java/time/LocalDate.html) objects, call the methods `isBefore`, `isAfter`, and `isEqual`. ``` boolean isFuture = someLocalDate.isAfter( today ) ; ``` --- About *java.time* ================= The [*java.time*](http://docs.oracle.com/javase/9/docs/api/java/time/package-summary.html) framework is built into Java 8 and later. These classes supplant the troublesome old [legacy](https://en.wikipedia.org/wiki/Legacy_system) date-time classes such as [`java.util.Date`](https://docs.oracle.com/javase/9/docs/api/java/util/Date.html), [`Calendar`](https://docs.oracle.com/javase/9/docs/api/java/util/Calendar.html), & [`SimpleDateFormat`](http://docs.oracle.com/javase/9/docs/api/java/text/SimpleDateFormat.html). The [*Joda-Time*](http://www.joda.org/joda-time/) project, now in [maintenance mode](https://en.wikipedia.org/wiki/Maintenance_mode), advises migration to the [java.time](http://docs.oracle.com/javase/9/docs/api/java/time/package-summary.html) classes. To learn more, see the [*Oracle Tutorial*](http://docs.oracle.com/javase/tutorial/datetime/TOC.html). And search Stack Overflow for many examples and explanations. Specification is [JSR 310](https://jcp.org/en/jsr/detail?id=310). You may exchange *java.time* objects directly with your database. Use a [JDBC driver](https://en.wikipedia.org/wiki/JDBC_driver) compliant with [JDBC 4.2](http://openjdk.java.net/jeps/170) or later. No need for strings, no need for `java.sql.*` classes. Where to obtain the java.time classes? 
* [**Java SE 8**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_8), [**Java SE 9**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_9), and later + Built-in. + Part of the standard Java API with a bundled implementation. + Java 9 adds some minor features and fixes. * [**Java SE 6**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_6) and [**Java SE 7**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_7) + Much of the java.time functionality is back-ported to Java 6 & 7 in [***ThreeTen-Backport***](http://www.threeten.org/threetenbp/). * [**Android**](https://en.wikipedia.org/wiki/Android_(operating_system)) + Later versions of Android bundle implementations of the java.time classes. + For earlier Android (<26), the [***ThreeTenABP***](https://github.com/JakeWharton/ThreeTenABP) project adapts *ThreeTen-Backport* (mentioned above). See [*How to use ThreeTenABP…*](http://stackoverflow.com/q/38922754/642706). The [**ThreeTen-Extra**](http://www.threeten.org/threeten-extra/) project extends java.time with additional classes. This project is a proving ground for possible future additions to java.time. You may find some useful classes here such as [`Interval`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/Interval.html), [`YearWeek`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/YearWeek.html), [`YearQuarter`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/YearQuarter.html), and [more](http://www.threeten.org/threeten-extra/apidocs/index.html). Upvotes: 2
2018/03/17
<issue_start>username_0: I noticed that `margin: 0px` has no effect. If you look on the screenshot, then you can see that the margin is still there even though it was set to 0 for left and right. [![enter image description here](https://i.stack.imgur.com/mkYsO.png)](https://i.stack.imgur.com/mkYsO.png) First I checked if any other class or id is doing this, but it is not. What is going on? I even tried it with `!important` but it made no difference. ```css html, body { margin:0; padding:0; height:100%; } .header { background: black; height: 60px; position: fixed; z-index: 1; width: 100%; } .footer { position: fixed; color: white; text-align: center; padding-top: 10px; background-color: black; width: 100%; bottom: 0px; padding-bottom: 10px; } .main { overflow-y: scroll; padding-top: 120px; padding-bottom: 120px; } .content { position: relative; min-height: 100%; } .containerFooter { margin-left: auto; margin-right: auto; font-size: 11px; } .innerContainer { padding-left: 6%; } .carObject { display: inline-block; border: 2px solid #ffdbdb; width: 380px; height: 512px; } .imagesWrap { margin-top: 50px; min-height: 250px; min-width: 352px; } .image { /*width: 444px;*/ height: 280px; margin:10px; padding-bottom:10px; } .car_image { display: inline; min-height: 400px; } .carObjectEdit { border: 2px solid #ffdbdb; min-width: 300px; width: 380px; } .carTitleContainer { font-family: fantasy; padding-top: 0px; padding-left: 0px; height: 24px; /*min-height:40px;*/ } .close { position: absolute; margin-right:4px; margin-top:4px; } .close:hover { color: red; } .edit { position: absolute; margin-right:4px; margin-top:4px; } .edit:hover { cursor:pointer; color: red; } .carProperties { font-size: 12px !important; font-family: arial; } .detailsDescription { min-height: 100px; font-size: 12px !important; text-align: center; border: 1px #d8d8d8 solid; padding-top: 10px; padding-left: 10px; padding-right: 10px; } p.detailsDescription { margin-bottom: 0px; } table.details tr td { 
min-width: 80px; text-align: center; } table.details tr td:nth-child(odd) { font-weight: 600; text-align: right; } /* --------------Full Responsive Design--------------------*/ .FilterContainer{ overflow-x:auto; margin-top: 150px; width: 740px; border: 1px outset #FFF; -webkit-box-shadow: 10px 11px 43px 7px rgba(0,0,0,0.6); -moz-box-shadow: 10px 11px 43px 7px rgba(0,0,0,0.6); box-shadow: 10px 11px 43px 7px rgba(0,0,0,0.6); padding: 10px; } .createContainer { width: 900px; margin-left: auto; margin-right: auto; } @media (max-width:1600px) and (min-width:1200px) { .createContainer { width: 800px; } } @media (max-width:1200px) and (min-width:769px) { .createContainer { width: 400px; } } @media (max-width:769px) and (min-width:360px) { .createContainer { width: 270px; } } @media (max-width:360px) { .createContainer { width: 225px; } } /*-Login-----------------*/ .navbar-form { margin-right: 80px; transform: scale(0.85); padding-top: 4px; } @media (max-width:1850px) and (min-width:1600px) { .navbar-form { width: auto; padding-top: 0; padding-bottom: 0; margin-right: 320px; margin-left: 0; border: 0; -webkit-box-shadow: none; box-shadow: none; } } @media (max-width:1600px) and (min-width:1500px) { .navbar-form { padding-top: 0; padding-bottom: 0; margin-right: 600px; margin-left: 0; border: 0; -webkit-box-shadow: none; box-shadow: none; } } @media (max-width:1500px) and (min-width:1400px) { .navbar-form { padding-top: 0; padding-bottom: 0; margin-right: 650px; margin-left: 0; border: 0; -webkit-box-shadow: none; box-shadow: none; } } @media (max-width:1400px) and (min-width:1300px) { .navbar-form { padding-top: 0; padding-bottom: 0; margin-right: 675px; margin-left: 0; border: 0; -webkit-box-shadow: none; box-shadow: none; } } @media (max-width:1300px) and (min-width:1200px) { .navbar-form { padding-top: 0; padding-bottom: 0; margin-right: 750px; margin-left: 0; border: 0; -webkit-box-shadow: none; box-shadow: none; } } @media (max-width:1200px) and (min-width:1100px) {
.navbar-form { padding-top: 0; padding-bottom: 0; margin-right: 800px; margin-left: 0; border: 0; -webkit-box-shadow: none; box-shadow: none; } } @media (max-width:1100px) and (min-width:600px) { .navbar-form { padding-top: 0; padding-bottom: 0; margin-right: 1000px; transform:scale(0.75); margin-left: 0; border: 0; -webkit-box-shadow: none; box-shadow: none; } } @media (max-width:1000px) and (min-width:800px) { .navbar-form { padding-top: 0; padding-bottom: 0; margin-right: 1080px; transform:scale(0.75); margin-left: 0; border: 0; -webkit-box-shadow: none; box-shadow: none; } } @media (max-width:800px) and (min-width:600px) { .navbar-form { padding-top: 0; padding-bottom: 0; margin-right: 1100px; transform:scale(0.75); margin-left: 0; border: 0; -webkit-box-shadow: none; box-shadow: none; } .FooterButton { font-size: 14px; } } @media (max-width:600px) and (min-width:300px) { .navbar-form { padding-top: 0; padding-bottom: 0; margin-right: 1100px; transform:scale(0.75); margin-left: 0; border: 0; -webkit-box-shadow: none; box-shadow: none; } .FooterButton { font-size: 13px; } } @media (max-width:300px) { .navbar-form { padding-top: 0; padding-bottom: 0; margin-right: 1100px; transform:scale(0.75); margin-left: 0; border: 0; -webkit-box-shadow: none; box-shadow: none; } .FooterButton { font-size: 12px; } } /* On smaller screens, where height is less than 450px, change the style of the sidenav (less padding and a smaller font size) */ @media screen and (max-height: 450px) { .sidenav {padding-top: 15px;} .sidenav a {font-size: 18px;} } /* ENDE --------------Full Responsive Design--------------------*/ /* ------------------ diverses ----------------------*/ .error { padding:10px; color: red; } .errorLog { color: red; } a.animatedLink { position: relative; text-decoration: none; } a.animatedLink:hover { text-decoration: none; } a.animatedLink:hover::after, a.animatedLink:focus::after { opacity: 1; transform: translateY(-4px); } a.animatedLink::after { position: absolute; 
top: 100%; left: 0; width: 100%; height: 2px; background: white; content: ''; opacity: 0; transition: opacity 0.3s, transform 0.3s; transform: translateY(8px); } /* ------------------ constantin --------------------*/ .ListContainer{ margin-top: 120px; width: 850px; border: 1px outset #FFF; -webkit-box-shadow: 10px 11px 43px 7px rgba(0,0,0,0.6); -moz-box-shadow: 10px 11px 43px 7px rgba(0,0,0,0.6); box-shadow: 10px 11px 43px 7px rgba(0,0,0,0.6); padding: 10px; } .DetailContainer{ margin-top: 120px; width: 1200px; border: 1px outset #FFF; -webkit-box-shadow: 10px 11px 43px 7px rgba(0,0,0,0.6); -moz-box-shadow: 10px 11px 43px 7px rgba(0,0,0,0.6); box-shadow: 10px 11px 43px 7px rgba(0,0,0,0.6); padding: 10px; } .KontaktContainer{ margin-top: 40px; width: 850px; height: 550px; border: 1px outset #FFF; -webkit-box-shadow: 10px 11px 43px 7px rgba(0,0,0,0.6); -moz-box-shadow: 10px 11px 43px 7px rgba(0,0,0,0.6); box-shadow: 10px 11px 43px 7px rgba(0,0,0,0.6); padding: 10px; } .InseratContainer{ margin-top: 120px; width: 740px; border: 1px outset #FFF; -webkit-box-shadow: 10px 11px 43px 7px rgba(0,0,0,0.6); -moz-box-shadow: 10px 11px 43px 7px rgba(0,0,0,0.6); box-shadow: 10px 11px 43px 7px rgba(0,0,0,0.6); padding: 10px; } .form-group{ padding: 7px; } label{ width: 150px; } select{ width: 220px !important; } input{ width: 220px !important; } .SubmitButton{ padding-top: 20px; text-align:center; } button{ font-size: 20px !important; } h3{ text-align:center; margin-bottom:30px; } .Login{ position: absolute; right: 7px; } .navbar-inverse{ height: 60px; } .navbar-brand{ position: absolute; left: 13px; top: 5px; font-size: 25px; color: white; } .navbar-brand:hover{ color: red; } .Menu{ position: absolute; left: 400px; top: 5px; font-size: 25px; } /* The side navigation menu */ .sidenav { color: white; margin-top: 60px; height: 100%; width: 0; position: fixed; z-index: 0; top: 0; right: 0; background-color: #2d2d2d; overflow-x: hidden; padding-top: 60px; transition: 0.5s; } /* The 
navigation menu links */ .sidenav a { padding: 8px 8px 8px 32px; text-decoration: none; font-size: 25px; color: #818181; display: block; transition: 0.3s } /* When you mouse over the navigation links, change their color */ .sidenav a:hover, .offcanvas a:focus{ color: #f1f1f1; } /* Position and style the close button (top right corner) */ .closebtn { position: absolute; top: 0; right: 10px; font-size: 36px; margin-left: 50px; color: white; background-color: #2d2d2d; border: 1px; } /* Style page content - use this if you want to push the page content to the right when you open the side navigation */ #main { transition: margin-left .5s; padding: 20px; } .MenuHide{ transform: scale(0.5); padding-left:50px; position:absolute; top: -3px; right:15px; margin: 15px; z-index: 500; } .MenuHide:hover{ cursor: pointer; } .btn{ text-align: left; } .modal-dialog-reg{ width: 268px !important; } .modal-dialog-anm{ width: 505px !important; } .modal-footer{ text-align: center; } .ListImg{ width: 250px; height: 150px; margin-right: 25px; } .ListView{ text-decoration: none; } .CarouselContainer{ width: 600px; height: 400px; } .DetailImg{ width: 600px; height: 380px; } .backendContainer { border:1px solid black; } .backendContainer img { height: 175px; border: none; } .deleteAreYouSure { width: 100%; margin: 4px; } .edit { width: 100%; margin: 4px; } .FooterButton{ color: #d4d4d4; font-size: 20px; text-decoration: none; margin-left: 20px; } .FooterButton:hover{ color: white; } .FooterTitle { margin: 0px; padding: 10px; } ``` ```html LaraPress [This is the header](http://localhost/-auto-shop/shop/public) ![](http://localhost/-auto-shop/shop/public/img/menu.png) Datenschutz ----------- Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. 
Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Duis autem vel eum iriure dolor in hendrerit in vulputate velit esse molestie consequat, vel illum dolore eu feugiat nulla facilisis at vero eros et accumsan et iusto odio dignissim qui blandit praesent luptatum zzril delenit augue duis dolore te feugait nulla facilisi. Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat. Ut wisi enim ad minim veniam, quis nostrud exerci tation ullamcorper suscipit lobortis nisl ut aliquip ex ea commodo consequat. Duis autem vel eum iriure dolor in hendrerit in vulputate velit esse molestie consequat, vel illum dolore eu feugiat nulla facilisis at vero eros et accumsan et iusto odio dignissim qui blandit praesent luptatum zzril delenit augue duis dolore te feugait nulla facilisi. Nam liber tempor cum soluta nobis eleifend option congue nihil imperdiet doming id quod mazim placerat facer possim assum. Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat. Ut wisi enim ad minim veniam, quis nostrud exerci tation ullamcorper suscipit lobortis nisl ut aliquip ex ea commodo consequat. Duis autem vel eum iriure dolor in hendrerit in vulputate velit esse molestie consequat, vel illum dolore eu feugiat nulla facilisis. At vero eos et accusam et justo duo dolores et ea rebum. 
Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, At accusam aliquyam diam diam dolore dolores duo eirmod eos erat, et nonumy sed tempor et et invidunt justo labore Stet clita ea et gubergren, kasd magna no rebum. sanctus sea sed takimata ut vero voluptua. est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat. Consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. 
Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Duis autem vel eum iriure dolor in hendrerit in vulputate velit esse molestie consequat, vel illum dolore eu feugiat nulla facilisis at vero eros et accumsan et iusto odio dignissim qui blandit praesent luptatum zzril delenit augue duis dolore te feugait nulla facilisi. Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat. Ut wisi enim ad minim veniam, quis nostrud exerci tation ullamcorper suscipit lobortis nisl ut aliquip ex ea commodo consequat. Duis autem vel eum iriure dolor in hendrerit in vulputate velit esse molestie consequat, vel illum dolore eu feugiat nulla facilisis at vero eros et accumsan et iusto odio dignissim qui blandit praesent luptatum zzril delenit augue duis dolore te feugait nulla facilisi. 
[Hans](http://localhost/-auto-shop/shop/public/user/showAccount) [Inserate](http://localhost/-auto-shop/shop/public/car/showAll) [Verkaufen](http://localhost/-auto-shop/shop/public/car/create) [Abmelden](http://localhost/-auto-shop/shop/public/logout) ### © 2017 [Impressum](http://localhost/-auto-shop/shop/public/impressum) [Datenschutz](http://localhost/-auto-shop/shop/public/datenschutz) [Haftungsausschluss](http://localhost/-auto-shop/shop/public/haftungsausschluss) [AGB](http://localhost/-auto-shop/shop/public/agb) [Kontakt](http://localhost/-auto-shop/shop/public/kontakt) ``` [Jsfiddle](https://jsfiddle.net/4bqxgeo2/11/) UPDATE: It is related with the weird @media settings. I noticed that it works if I uncheck each one of them: [![enter image description here](https://i.stack.imgur.com/VDtzj.png)](https://i.stack.imgur.com/VDtzj.png) Why is Bootstrap 4 designed like this and how can I disable this "feature"? EDIT: I "downgraded" to 3.3.7 again, no headache anymore...<issue_comment>username_1: Bootstrap4 uses flexbox. Try to set `display: inline-flex` - you can add the class: `.d-inline-flex` Upvotes: 0 <issue_comment>username_2: You're using `.m-0`, which applies `margin:0!important;` on `.container`, which uses left and right margin values of `auto`, combined with hard-coded `width`s, **to be centered**. If you want your `.container` to fill the entire screen, use `.container-fluid`. --- Ample note regarding use of `.m-0` and the likes: Unfortunately, Bootstrap chose a weird (if you ask me) way to develop a new system of imposing `margin`s and `padding`s in v4. It's weird (and wrong, IMHO) because it is way too powerful in terms of specificity. In short, it implies the use of `!important` for classes of type `m-*` and `p-*`. It also creates numerous problems when you need different margin/padding values applied responsively. As far as I'm concerned, it's a mistake and I personally chose not to use this system at all, in any of my projects. 
If you choose to use them, get used to using `!important` and at least one `.className` to apply your desired values over Bootstrap's selectors. The obvious alternative is to not use `.m-*` and `.p-*` classes at all and apply `margin`s and `padding`s as in previous versions of Bootstrap, without the need to use `!important`. --- **Note:** Posting images of code is quite wrong (on SO), for several reasons: * it denies indexing, so people with a similar error won't be able to find it by searching the error (when you're posting an error) * most times, it's a lot easier and faster to understand a code if you see the result and inspect it, which is not possible when you post an image of code Do consider replacing the images with the actual code or adding a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve) in your question, featuring your issue. Upvotes: 3 [selected_answer]
2018/03/17
1,063
3,867
<issue_start>username_0: I have a small data set with 47 samples. I'm running linear regression with 2 features. After running LinearRegression I ran Ridge (with sag). I would expect it to converge quickly, and return exactly the same prediction as computed solving the normal equations. But every time I run Ridge I get a different result, close to the result provided by LinearRegression but not exactly the same. It doesn't matter how many iterations I run. Is this expected? Why? In the past I've implemented regular gradient descent myself and it quickly converges on this data set.

```
ols = sklearn.linear_model.LinearRegression()
model = ols.fit(x_train, y_train)
print(model.predict([[1650,3]]))
%[[ 293081.4643349]]

scaler = preprocessing.StandardScaler().fit(x_train)
ols = sklearn.linear_model.Ridge(alpha=0, solver="sag", max_iter=99999999, normalize=False)
model = ols.fit(x_scaled, y_train)
x_test = scaler.transform([[1650,3]])
print(model.predict(x_test))
%[[ 293057.69986594]]
```<issue_comment>username_1: The difference between your two outputs may come from the preprocessing that you apply only for the Ridge regression: `scaler=preprocessing.StandardScaler().fit(x_train)`. Such normalization changes the representation of your data and may lead to different results. Note also that plain [OLS](https://en.wikipedia.org/wiki/Ordinary_least_squares) minimizes only the squared differences between expected and predicted outputs, while [Ridge](https://en.wikipedia.org/wiki/Tikhonov_regularization) additionally penalizes the L2 norm of the coefficients. Upvotes: 0 <issue_comment>username_2: Thank you all for your answers! After reading @sascha's response I read a little more about Stochastic Average Gradient descent, and I think I've found the reason for this discrepancy: it seems to be due to the "stochastic" part of the algorithm.
Please check the Wikipedia page: <https://en.wikipedia.org/wiki/Stochastic_gradient_descent>

In regular gradient descent we update the weights on every iteration based on this formula:

![gradient descent](https://wikimedia.org/api/rest_v1/media/math/render/svg/4895d44c0572fb2988f2f335c28cc055a7f75fa0)

where the second term of the sum is the gradient of the cost function multiplied by a learning rate mu. This is repeated until convergence, and it always gives the same result after the same number of iterations, given the same starting weights.

In Stochastic Gradient Descent this is done instead on every iteration:

![stochastic gradient descent](https://wikimedia.org/api/rest_v1/media/math/render/svg/dda3670f8a8996a0d3bf80856bb4a166cc8db6d4)

where the second part of the sum is the gradient in **a single sample** (multiplied by the learning rate mu). All the samples are randomized at the beginning, and then the algorithm cycles through them on every iteration.

So I think a couple of things contribute to the behavior I asked about (EDITED, see the notes below):

1. The point used to calculate the gradient at every iteration changes every time I re-run the fit function. That's why I don't obtain the same result every time. (EDIT: this can be made deterministic by passing **random_state** when calling the fit method.)

2. I also realized that the number of iterations the algorithm runs varies between 10 and 15, regardless of the `max_iter` limit I set. I couldn't find anywhere what the convergence criterion is in scikit-learn, but my guess is that if I could tighten it (i.e. run for more iterations) the answer I would get would be much closer to the LinearRegression result. (EDIT: the convergence criterion depends on **tol**, the precision of the solution. By tightening this parameter (I set it to 1e-100) I was able to obtain the same solution as the one reported by LinearRegression.)

Upvotes: 2 [selected_answer]
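Putting the two EDIT notes together, the effect can be reproduced end to end. The sketch below is an illustration rather than the thread's exact code: the data set is synthetic (the original 47-sample set isn't shown), but with `random_state` fixed and `tol` tightened, the SAG solution matches the closed-form `LinearRegression` one.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the 47-sample, 2-feature data set.
rng = np.random.RandomState(0)
X = np.column_stack([rng.uniform(500, 4500, 47), rng.randint(1, 6, 47)])
y = X @ np.array([120.0, 5000.0]) + 30000 + rng.normal(0, 100, 47)

ols = LinearRegression().fit(X, y)

scaler = StandardScaler().fit(X)
# random_state makes the sample shuffling deterministic across runs;
# a tight tol forces SAG to keep iterating until it has truly converged.
sag = Ridge(alpha=0, solver="sag", tol=1e-10, max_iter=100000,
            random_state=42).fit(scaler.transform(X), y)

x_new = np.array([[1650.0, 3.0]])
p_ols = ols.predict(x_new)[0]
p_sag = sag.predict(scaler.transform(x_new))[0]
```

Without `random_state`, each call to `fit` shuffles the samples differently and the prediction wobbles slightly; without the tight `tol`, SAG stops after a handful of epochs and lands near, but not on, the normal-equations solution.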
2018/03/17
579
1,813
<issue_start>username_0: I have a data.frame that has 15 columns and looks like the following:

```
Word  Syllable TimeStart TimeEnd Duration PitchMin PitchMax TimePitchMin
Einen "aI      0.00      0.11    0.11     98.173   106.158  0.053
Einen n@n      0.11      0.24    0.13     106.158  123.176  0.110

TimePitchMax PitchSlope IntenMax IntenMin TimeIntenMax TimeIntenMin PitchAccent
0.110        140.443    83.794   82.583   0.095        0.051        no
0.210        169.359    83.875   80.458   0.210        0.234        no
```

I want to save the data into a .txt file. But when I use the standard write.table(table, "outfile.txt") call, the result looks like a mess. What appropriate arguments can be used to solve this problem?

**EDIT:** The print screen of the messy output:

[![enter image description here](https://i.stack.imgur.com/wpNhI.png)](https://i.stack.imgur.com/wpNhI.png)<issue_comment>username_1: What happens if you use write.table(table, "outfile.txt", sep="\t", row.names=FALSE)? That should help you create a tab-delimited text file. If the output still looks like a mess, you can export your file as a csv with write.csv(table, "outfile.txt", row.names=FALSE). Upvotes: 2 <issue_comment>username_1: Did you check the structure of your table with str(table) before you export? It looks like the table may contain some corrupt variable names and/or values, which may in turn cause export problems. In the ideal case, when you do str(table), you should see that the table object is a data.frame (or tibble) with proper variable names and values. If you see variable names like """ or c(9,11,11, ...) etc., that's a signal that your problem is with how you create table, not how you export it. Upvotes: 2 [selected_answer]
2018/03/17
684
1,952
<issue_start>username_0: Okay so I want to move all digits from the end of the line to the front of the line, example of lines:

```
example123
example321
example2920
```

expected output:

```
123example
321example
2920example
```

The following sed command works to move numbers from the start to the end:

```
sed -E 's/^([0-9]+)(.*)/\2\1/' file
```

Input for this is

```
123example
321example
```

and the output is

```
example123
example321
```

but when trying to do the same for numbers at the end moved to the front I can't seem to do it. I've tried changing `^` to `$` and other things, but I'm new to sed so I don't really understand a lot.<issue_comment>username_1: The working example you give matches "a string of digits followed by anything". You might think that you could alter it to match "anything followed by some digits" (`/\(.*\)\([0-9]*\)/`), but the trouble is that sed matches as much as it can; in the working example it grabs as many digits as it can, and in the naive new version it grabs everything in the first term. The trick is to match *non-digits followed by anything*:

```
sed 's/\([^0-9]*\)\(.*\)/\2\1/' filename
```

**EDIT:** I hadn't thought of the possibility of *non*-trailing digits. This:

```
sed 's/\(.*[^0-9]\)\(.*\)/\2\1/' filename
```

will turn

```
Ex1am2ple123
```

into

```
123Ex1am2ple
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: Using GNU awk, you could match the trailing digits and then print that match, followed by a substring of the input that chops off at the end the number of characters that were matched:

```
awk 'match($0, /[0-9]+$/, m) {print m[0] substr($0, 1, length($0) - length(m[0]))}' file
```

This solution works even if there are digits in the prefix, for example:

```
$ cat file
ex456le123
456mple321
example2920
$ gawk 'match($0, /[0-9]+$/, m) {print m[0] substr($0, 1, length($0) - length(m[0]))}' file
123ex456le
321456mple
2920example
```

Upvotes: 1
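For readers who would rather prototype the transformation outside sed, the same rotation can be sketched in Python with the standard `re` module (the helper name is made up for illustration):

```python
import re

def digits_to_front(line: str) -> str:
    """Move a trailing run of digits (if any) to the front of the line."""
    m = re.search(r'\d+$', line)  # anchored to the end, like the sed/awk answers
    if not m:
        return line               # no trailing digits: leave the line alone
    return m.group(0) + line[:m.start()]

print(digits_to_front("example123"))   # 123example
print(digits_to_front("ex456le123"))   # 123ex456le
```

As in the accepted answer, the match is anchored to the end of the line, so digits appearing elsewhere in the string are left in place.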
2018/03/17
839
3,051
<issue_start>username_0: I am trying to subscribe from a service. It builds without error, but I get the error "this.myService.getAll is not a function" when viewing in the browser. Service:

```
import { Injectable } from '@angular/core';
import { Http } from "@angular/http";

@Injectable()
export class MyServiceService {
  url: string;

  constructor(private http: Http) {
    this.url = "http://127.0.0.1:8000/scrumboard";
  }

  public getAll(): any {
    let result = this.http.get(this.url + "/lists");
    return result;
  }
}
```

Component:

```
import { Component, OnInit } from '@angular/core';
import { MyServiceService } from '../my-service.service';
import { Response } from '@angular/http';

@Component({
  selector: 'app-dashboard',
  templateUrl: './full-layout.component.html'
})
export class FullLayoutComponent {
  public disabled: boolean = false;
  public status: {isopen: boolean} = {isopen: false};

  public toggled(open: boolean): void {
    console.log('Dropdown is now: ', open);
  }

  public toggleDropdown($event: MouseEvent): void {
    $event.preventDefault();
    $event.stopPropagation();
    this.status.isopen = !this.status.isopen;
  }

  constructor(public myService: MyServiceService) { }

  public subscribe() {
    this.myService.getAll().subscribe(
      (data: Response) => console.log(data)
    );
  }
}
```

Finally the app.module.ts:

```
providers: [{
  provide: MyServiceService,
  useClass: HashLocationStrategy,
}],
```

Any ideas what I am doing wrong?<issue_comment>username_1: It is a bad idea to name a method subscribe(), because it can confuse you and other developers; it is better to name it something like:

```
public subscribetoGetAll(...;
```

The solution is to call this method in the constructor or in ngOnInit (ngOnInit is the better place):

```
export class FullLayoutComponent implements OnInit {
  public disabled: boolean = false;
  public status: {isopen: boolean} = {isopen: false};

  public toggled(open: boolean): void {
    console.log('Dropdown is now: ', open);
  }

  public toggleDropdown($event: MouseEvent): void {
    $event.preventDefault();
    $event.stopPropagation();
    this.status.isopen = !this.status.isopen;
  }

  constructor(private myService: MyServiceService) { }

  ngOnInit() {                  // ngOnInit added
    this.subscribetoGetAll();   // <-- call your method here
  }

  public subscribetoGetAll() {
    this.myService.getAll().subscribe(
      (data: Response) => console.log(data)
    );
  }
}
```

Upvotes: 1 <issue_comment>username_2: You have to declare your provider as:

```
providers: [ MyServiceService ]
```

because you are using the `HashLocationStrategy` class in place of your service class, and `HashLocationStrategy` doesn't have a `getAll` function. Another thing is to inject your service as private:

```
constructor(private myService: MyServiceService)
```

and call the `subscribe` function you added from [ngOnInit](https://angular.io/api/core/OnInit):

```
ngOnInit() {
  this.subscribe();
}
```

Upvotes: 3 [selected_answer]
2018/03/17
480
1,705
<issue_start>username_0: I would like to know what the differences are (in theory) between using

```
if (a == x && b == y)
```

and

```
if (a == x){
    if (b == y){
    }
}
```

Edit: My question is not about the result but more about performance. I found that sorting a table and then searching in it can be more efficient than searching it directly, and I would like to know whether there is a similar effect here, or whether the compiler just replaces the 'and' with another if. Thank you<issue_comment>username_1: When you use the && logical operator, if the first expression is false the second expression will not be evaluated. So in theory both examples are the same. Upvotes: 0 <issue_comment>username_2: **In the first case:**

```
if (a == x && b == y) {
    // code block
}
```

the code block never executes unless both conditions (a == x and b == y) are true.

**But in the second case:**

```
if (a == x) {
    // code block 1
    if (b == y) {
        // code block 2
    }
}
```

**code block 1** executes whenever the first condition (a == x) is true, while **code block 2** executes only when both conditions are true. Upvotes: -1 <issue_comment>username_3: With `if (a == x && b == y)`, both conditions have to be satisfied; then and only then does the if branch run. On the other hand, with

```
if (a == x){
    if (b == y){
    }
}
```

if the first condition (a == x) is satisfied we enter the outer if, and only then do we check the second condition (b == y). The checks happen in sequence, so you can also perform some work after the first condition passes, before testing the second. Upvotes: -1
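The short-circuiting described in the first answer can be observed directly. This sketch uses Python, whose `and` short-circuits the same way `&&` does in C-like languages, to show that the second condition is only evaluated when the first one holds:

```python
calls = []

def check(name, result):
    # Record that this condition was actually evaluated before returning it.
    calls.append(name)
    return result

# Equivalent of `if (a == x && b == y)` with the first test failing:
# the second operand is never evaluated.
if check("first", False) and check("second", True):
    pass
first_run = list(calls)

# With the first test passing, both operands are evaluated.
calls.clear()
if check("first", True) and check("second", True):
    pass
second_run = list(calls)
```

This is why, for side-effect-free conditions, the combined test and the nested ifs perform the same checks in the same order.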
2018/03/17
539
1,977
<issue_start>username_0: I am trying to write a simple array. I have written many arrays, but I don't understand why on this occasion I am not getting the desired result. Below is the code I am using: I generate a report name and put it in an array to retrieve later on by referencing the index number. The Debug.Print inside the loop works and I can see the names being stored, but outside the loop I cannot retrieve them. Any ideas?

```
For i = 1 To num
    reportname = API & SetPeriod & APIEnd
    ReDim retrieve(1 To i)
    retrieve(i) = reportname
    SetPeriod = SetPeriod + 1
    Debug.Print retrieve(i)
Next i

ReDim retrieve(1 To 4)
Debug.Print retrieve(2)
Debug.Print retrieve(2)
```
2018/03/17
440
1,586
<issue_start>username_0: I want to achieve something you can see in the Android BlueTooth settings: two listviews and one activity. First listview contains already paired devices, second - newly discovered (this one doesn't have fixed size). How can I develop activity like this? Putting listviews in scrollview isn't very good idea as far as I know. So, what would you recommend?<issue_comment>username_1: Why don't you just put the two ListViews in LinearLayout ``` ``` Upvotes: 0 <issue_comment>username_1: I think you do not need two ListViews at all. You need to implement one ListView with headers as described [here](https://stackoverflow.com/questions/13590627/android-listview-headers). ``` public class MainActivity extends ListActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); List items = new ArrayList(); items.add(new Header("Header 1")); items.add(new ListItem("Text 1", "Rabble rabble")); items.add(new ListItem("Text 2", "Rabble rabble")); items.add(new ListItem("Text 3", "Rabble rabble")); items.add(new ListItem("Text 4", "Rabble rabble")); items.add(new Header("Header 2")); items.add(new ListItem("Text 5", "Rabble rabble")); items.add(new ListItem("Text 6", "Rabble rabble")); items.add(new ListItem("Text 7", "Rabble rabble")); items.add(new ListItem("Text 8", "Rabble rabble")); TwoTextArrayAdapter adapter = new TwoTextArrayAdapter(this, items); setListAdapter(adapter); } } ``` Upvotes: 2 [selected_answer]
2018/03/17
942
2,632
<issue_start>username_0: I need help parsing this XML message into a JSON structure. I have tried using Jackson library using `XmlMapper`. It keeps giving a parsing error. Do I need to parse out extra characters? ``` xml version="1.0" encoding="utf-8" standalone="yes"? sim\_default\_CouponType sim\_default\_CouponNumber 0 0 sim\_default\_TenderCode 0 0 0 1 0 0 sim\_default\_UPC 0 0 0 false 0 false 1970-01-01T00:00:00 NEW false false sim\_default\_ImmunType false sim\_default\_UPCDescription 0 0 0 ``` I have tried this: ``` XmlMapper xmlMapper = new XmlMapper(); JsonNode node = xmlMapper.readTree(xml.getBytes()); ObjectMapper jsonMapper = new ObjectMapper(); String json = jsonMapper.writeValueAsString(node); System.out.println(json); ``` But keep getting error ``` Misshaped close tag at 2299 [character 91 line 52] ```<issue_comment>username_1: The following works for me: ``` public static void main(String[] args) throws IOException { String xml = new String(Files.readAllBytes(Paths.get(".... /xmlfile.xml"))); XmlMapper xmlMapper = new XmlMapper(); JsonNode node = xmlMapper.readTree(xml.getBytes()); ObjectMapper jsonMapper = new ObjectMapper(); String json = jsonMapper.writeValueAsString(node); System.out.println(json); } ``` Some things to try if it's still failing: * Try removing the processing instruction `xml version="1.0" encoding="utf-8" standalone="yes"?`. Your code above seems to have leading spaces before this. Is that correct? * Check that the source XML passes validation. * Check for trailing characters in your source XML. Upvotes: 0 <issue_comment>username_2: You can use JSONObject library to convert your xml to JSON. 
``` String xml = "xml version=\"1.0\" encoding=\"utf-8\" standalone=\"yes\"?\n" + "\n" + " \n" + " \n" + " \n" + " sim\_default\_CouponType\n" + " sim\_default\_CouponNumber\n" + " 0\n" + " 0\n" + " \n" + " \n" + " \n" + " \n" + " sim\_default\_TenderCode\n" + " 0\n" + " 0\n" + " \n" + " \n" + " \n" + " \n" + " 0\n" + " 1\n" + " 0\n" + " 0\n" + " sim\_default\_UPC\n" + " 0\n" + " 0\n" + " 0\n" + " \n" + " \n" + " false\n" + " \n" + " 0\n" + " false\n" + " 1970-01-01T00:00:00\n" + " NEW\n" + " false\n" + " false\n" + " sim\_default\_ImmunType\n" + " false\n" + " \n" + " sim\_default\_UPCDescription\n" + " \n" + " \n" + " \n" + " 0\n" + " 0\n" + " 0\n" + " \n" + " \n" + ""; System.out.println(xml); JSONObject obj = XML.toJSONObject(xml); System.out.println(obj); ``` Upvotes: 1
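Both answers lean on a library (Jackson's `XmlMapper`, `org.json`'s `XML.toJSONObject`) to do the XML-to-JSON mapping. As a rough, language-agnostic illustration of what that mapping involves, here is a minimal recursive converter using only the Python standard library; the sample document and its tag names are made up, since only the shape matters:

```python
import json
import xml.etree.ElementTree as ET

def element_to_dict(elem):
    # Minimal XML -> dict mapping: leaf elements become their text,
    # repeated child tags become lists; attributes and mixed content
    # are ignored for brevity.
    children = list(elem)
    if not children:
        return elem.text
    result = {}
    for child in children:
        value = element_to_dict(child)
        if child.tag in result:
            if not isinstance(result[child.tag], list):
                result[child.tag] = [result[child.tag]]
            result[child.tag].append(value)
        else:
            result[child.tag] = value
    return result

sample = "<Coupon><CouponType>sim_default_CouponType</CouponType><Count>0</Count></Coupon>"
root = ET.fromstring(sample)
doc = {root.tag: element_to_dict(root)}
print(json.dumps(doc))
```

Note that this sketch, like `XML.toJSONObject`, will reject malformed input at parse time — which is exactly the "Misshaped close tag" failure mode the question ran into.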
2018/03/17
532
1,751
<issue_start>username_0: I am trying to do a search on my database where it returns results based on exact surnames and first names with the initial of the name entered in an input box. For instance, if the user enters `<NAME>` it will return all users with the initial `S` for the first name and `Burns` for the Surname. Here is my code. Sorry if this has been covered before but I cannot find a simple answer that works. ``` $firstname = filter_input(INPUT_POST, 'firstname', FILTER_SANITIZE_STRING); $surname = filter_input(INPUT_POST, 'surname', FILTER_SANITIZE_STRING); $prep_query = "SELECT id, firstname, surname, profilepic, Gender, StartYear, EndYear, CircaStart, CircaEnd, JnrHouse, SnrHouse FROM people WHERE firstname LIKE ? AND surname = ?"; $namecheckresult = $connection->prepare($prep_query); // Return all matches wether registered on not if ($namecheckresult) { $initial = $firstname[0]; $namecheckresult->bind_param( 'ss', $initial . '%', $surname); $namecheckresult->execute(); $namecheckresult->store_result(); $namecheckresult->bind_result($id, $first, $last, $ProfilePic, $Gender, $StartYear, $EndYear, $CircaStart, $CircaEnd, $JnrHouse, $SnrHouse); ```<issue_comment>username_1: You have mixed up your 2 parameters Change it to ``` $p1 = '%'.$initial; $namecheckresult->bind_param( 'ss', $p1 , $surname); ``` Or you may want to put the `%` on the front and back of `$initial` like this ``` $p1 = "%$initial%"; $namecheckresult->bind_param( 'ss', $p1 , $surname); ``` Upvotes: -1 <issue_comment>username_2: I have now worked out the answer. changing $initial = $firstname[0]; to $initial = "%$firstname[0]%"; works. Hope this helps others. Upvotes: -1
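The underlying pattern — bind the wildcard as part of the parameter value instead of splicing it into the SQL text — is the same in any language. Here is a sketch in Python with sqlite3 (the table and rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (firstname TEXT, surname TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [("Sarah", "Burns"), ("Steve", "Burns"), ("Tom", "Burns")])

firstname, surname = "Sam", "Burns"
# The wildcard belongs in the bound value, not in the SQL string:
pattern = firstname[0] + "%"      # initial of the first name, e.g. 'S%'
rows = conn.execute(
    "SELECT firstname, surname FROM people "
    "WHERE firstname LIKE ? AND surname = ? ORDER BY firstname",
    (pattern, surname),
).fetchall()
print(rows)
```

This matches every `Burns` whose first name starts with `S`, while the literal surname is still compared exactly.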
2018/03/17
735
2,428
<issue_start>username_0: I have an App which retrieves a random statement from a postgres table when a button is clicked: ``` getRandomQuote() { axios.get('http://localhost:3100/quote').then(response => { console.log('1111111', response); this.setState( { quote: response.data[0].quote_text, quoteAuthor: response.data[0].author, opacityZero: false } ); setTimeout(_=>this.setState( {opacityZero: true} ),100) }) } ``` However, when I click the button while running the app, I receive the result: ``` GET http://localhost:3100/quote net::ERR_CONNECTION_REFUSED ``` along with ``` Uncaught (in promise) Error: Network Error at createError (createError.js:16) at XMLHttpRequest.handleError (xhr.js:87) ``` Attempting to go to <http://localhost:3100/quote> in the chrome browser delivers me to "This site can’t be reached localhost refused to connect. ERR\_CONNECTION\_REFUSED". For the time being I don't seem to have a reason to believe that this issue is the result of my express server or database as far as I know, since the primary issue seems to be its inability to connect to a localhost endpoint. server.js ``` var express = require('express'); var app = express(); var http = require('http'); var massive = require('massive'); const cors = require('cors'); var connectionString = 'postgres://myname@localhost/quotedatabase'; console.log('getting here'); app.use(cors()); massive(connectionString).then(db => { app.set('db', db); http.createServer(app).listen(3100); console.log('getting here two'); }) var db = app.get('db'); app.get('/', function(req, res){ res.send('a little short yeah? 
ask for a quote'); }); app.get('/quote', function(req, res) { var db = app.get('db'); db.getRandomQuote().then(randomQuote => { console.log(randomQuote); res.send(randomQuote); }).catch(console.log) }) ```
2018/03/17
1,204
4,495
<issue_start>username_0: I have been using Github as an individual for some time. Now I need to create an organization and start a repo in the organization. So, logged in to Github as myself, I created the new organization. I then created a repo. Viewing the repo I can see that I am a contributor to the repo. The repo is, and needs to be, private. When I try to clone:

```
$ git clone https://github.com/my-organization/my-repo.git my-repo-folder
Cloning into 'my-repo-folder'...
remote: Repository not found.
fatal: repository 'https://github.com/my-organization/my-repo.git/' not found
```

Locally git is remembering my credentials, so for example, if I have a repository in my personal Github repos, which is private, then `git clone https://github.com/Webern/my-personal-private-repo.git` works without requesting that I re-enter my Github username and password. What is going on with the organization? How do I clone my newly-formed organization's private repo?<issue_comment>username_1: My suspects:

* You probably have an extra layer of security on your private repo; confirm whether you have two-factor authentication or something else enabled. That will make Git flag an error or report wrong login credentials any time you try to push. But in your case, you were trying to clone, so ensure that you're using the same credentials (the email especially) for the organization repo that you're using for your personal repo; it might possibly be a permission thing. Git might assume the credentials you entered don't have the right to access the repo.

Upvotes: 0 <issue_comment>username_2: Ultimately I believe the problem had something to do with the caching of credentials in my Mac's Keychain. The complete list of steps taken to fix the problem is below, but I believe the crucial step was to delete all Github credentials using Mac OS's Keychain Access utility. Adding the username and password to the https URL does work, but seems like a bad idea for permanent use.
```
MyMac:~ mjb$ git clone https://MyGithubUsername:MyGithubPassword@github.com/my-organization/my-repo.git
MyMac:~ mjb$ cd my-repo
MyMac:my-repo mjb$ git remote set-url origin https://github.com/my-organization/my-repo.git
```

After changing the URL back so that it does not contain my username and password, I still could not connect to the remote.

```
MyMac:my-repo mjb$ git push origin --all
remote: Repository not found.
```

Now I did a bunch of flailing with git config, which I suppose was unnecessary.

```
MyMac:my-repo mjb$ git config --global user.name MyGithubUsername
MyMac:my-repo mjb$ git --version
git version 2.15.0
MyMac:my-repo mjb$ git config --global credential.helper manager
MyMac:my-repo mjb$ git push origin --all
remote: Repository not found.
fatal: repository 'https://github.com/my-organization/my-repo.git/' not found
MyMac:my-repo mjb$ git config --global --unset credential.helper
MyMac:my-repo mjb$ git push origin --all
remote: Repository not found.
fatal: repository 'https://github.com/my-organization/my-repo.git/' not found
```

**Now I went on my Mac to Utilities -> Keychain Access and deleted all credentials relating to Github.** What is strange about this is that they were correct, and when I entered my username and password again I was entering the same username and password.

```
MyMac:my-repo mjb$ git push origin --all
Username for 'https://github.com': MyGithubUsername
Password for 'https://MyGithubUsername@github.com': <PASSWORD>
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 651 bytes | 651.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
```

Success.
Upvotes: 5 [selected_answer]<issue_comment>username_3: In case you have access tokens enabled for your GitHub account, make sure to tick the entire `repo` setting under *Developer settings* -> *Personal access tokens* as shown [here](https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line) (Step 7). Enabling all suboptions in the `repo` category is not enough. That did the trick for me. No need to reset your credentials anywhere. I had only the `public_repo` option enabled and that was sufficient to clone and push to my public repos but not to private repos in the organization where I even had an admin role. The error is quite cryptic and misleading, especially when you are sure to have all the access rights. Upvotes: 2
2018/03/17
478
1,464
<issue_start>username_0: I'm trying to use the inverse CDF of the Gumbel Dist. to simulate random numbers. However for the inverse I get `mu-x*log(-log(beta))`, which spits out imaginary numbers, which can't be right. The original CDF is `e^-e^(-(x-mu)/beta)`. And my code is:

```
n = 1000        # sample size
set.seed(1)     # Makes the outcomes reproducible
x = runif(n)    # simulate n uniform pseudo-random numbers
fx = 0-x*log(-log(10))  # Runs the pseudo-random numbers through the inverse CDF
```

If anyone can tell where I'm going wrong that would be very helpful, thanks.<issue_comment>username_1: You're simplifying incorrectly. `0-x*log(-log(10))` will always be undefined, no matter what `x` is. This is because you are taking the log of a negative number. A log with an argument greater than 1 will \*always be positive (e.g., `log(10)`), making `-log(10)` negative. Pull out the negative signs in the exponents for your CDF first, before taking the log of both sides.

\*provided the base is greater than 1

Upvotes: 1 <issue_comment>username_2: You transposed the x and beta in the quantile function:

```
q <- mu - beta*log(-log(x))

x = runif(100)
qgum <- function(x, mu=0, beta=1){
    stopifnot(beta > 0)
    mu - beta*(log(-log(x)))
}
```

That does return real (albeit some negative) values from the `runif` argument, but this agrees with the calculation from `actuar::qgumbel`:

```
all.equal( qgumbel(x, 0, 10), qgum(x, 0, 10) )
[1] TRUE
```

Upvotes: 0
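username_2's quantile function translates directly out of R. As a sketch of the same inverse-CDF sampling in Python (stdlib only; the names `qgumbel`/`pgumbel` here are illustrative choices, not the thread's `actuar` functions):

```python
import math
import random

def qgumbel(p, mu=0.0, beta=1.0):
    # Gumbel quantile function: inverts the CDF exp(-exp(-(x - mu)/beta)).
    if not 0.0 < p < 1.0:
        raise ValueError("p must be strictly between 0 and 1")
    if beta <= 0:
        raise ValueError("beta must be positive")
    return mu - beta * math.log(-math.log(p))

def pgumbel(x, mu=0.0, beta=1.0):
    # The CDF itself, useful as a round-trip check on the quantile function.
    return math.exp(-math.exp(-(x - mu) / beta))

random.seed(1)  # reproducible, like set.seed(1) in the question
sample = [qgumbel(random.random(), mu=0.0, beta=10.0) for _ in range(1000)]
print(min(sample), max(sample))
```

Note that negative draws are expected here: the Gumbel support is the whole real line, so real (not imaginary) negative values are the correct behaviour.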
2018/03/17
757
2,897
<issue_start>username_0: I have the following piece of code, which receives a history object as prop:

```
const ChildComponent = ({ history }) => (
   history.push(routes[4].path)}> Click me 
);
```

How do I add a typecheck for this history prop, **which is received by wrapping its parent with the withRouter HOC**? One way I could think of is to write something like this:

```
interface Props {
  history: {
    push(url: string): void;
  };
}
```

But I'm sure this is not the right way, because the rest of the properties of the history object are being lost. Can you suggest the right way of doing this?

**UPDATED the code based on @Oblosys's answer**

```
import { withRouter, RouteComponentProps } from "react-router-dom";

interface Props extends RouteComponentProps {
  /* Parent component's props */
}

class Parent extends React.Component {
  render() {
    return ;
  }
}

// Child component related stuff
interface ChildComponentProps extends RouteComponentProps {}

const ChildComponent: React.SFC<ChildComponentProps> = (props) => (
   history.push(routes[4].path)}> Click me 
);

function mapStateToProps(state: types.AppState) { /* related code */ }

function mapDispatchToProps(dispatch: Redux.Dispatch) { /* related code */ }

export default withRouter(connect(mapStateToProps, mapDispatchToProps)(Parent));
```

But, now I'm getting the following error:

```
Type '{ history: History; }' is not assignable to type 'ChildComponentProps'.
Property 'match' is missing in type '{ history: History; }'
```<issue_comment>username_1: You can use the `RouteComponentProps` interface, which declares all props passed by `withRouter`:

```
import { RouteComponentProps } from 'react-router-dom';
..
interface ChildComponentProps extends RouteComponentProps {
  /* other props for ChildComponent */
}

const ChildComponent : React.SFC<ChildComponentProps> = ({ history }) => (
  ..
);
```

The type parameter to `RouteComponentProps` is the type of the `params` property in `match`, so you won't need it unless you're matching named path segments.

Alternatively, if `history` doesn't come from `withRouter` but is passed by itself as a prop, you can import the type from `history`:

```
import { History } from 'history';
..
interface ChildComponentProps {
  history : History
  /* other props for ChildComponent */
}

const ChildComponent : React.SFC<ChildComponentProps> = ({ history }) => (
  ..
);
```

Upvotes: 8 [selected_answer]<issue_comment>username_2: For React 16.8 with hooks:

```
...
import {withRouter, RouteComponentProps} from 'react-router-dom';
...
const ChildComponent: React.FunctionComponent<RouteComponentProps> = ({history}) => {
  ...
}
```

Upvotes: 4 <issue_comment>username_3: The simplest solution I found:

```
import { RouteComponentProps } from 'react-router-dom';
....
interface Foo {
  history: RouteComponentProps["history"];
  location: RouteComponentProps['location'];
  match: RouteComponentProps['match'];
}
```

Upvotes: 5
2018/03/17
957
3,771
<issue_start>username_0: I've got an ArrayList of objects and I am trying to edit a field of one of those objects, but when I do it I am receiving ConcurrentModificationException. I've read that I need to use Iterator, but even with this I am still receiving the error message. My code:

```
if(summedEmotionValuesArrayList.size() == 0){
    summedEmotionValuesArrayList.add(newEmotionValuesDataset);
}else {
    for (Iterator<EmotionValuesDataset> currentEmotionValuesDataset = summedEmotionValuesArrayList.iterator(); currentEmotionValuesDataset.hasNext(); ) {
        EmotionValuesDataset emotionValuesDataset = currentEmotionValuesDataset.next();
        if (emotionValuesDataset.getEmotionName().equals(newEmotionValuesDataset.getEmotionName())) {
            double newValue = emotionValuesDataset.getEmotionValue() + newEmotionValuesDataset.getEmotionValue();
            emotionValuesDataset.setEmotion_value(newValue);
        } else {
            summedEmotionValuesArrayList.add(newEmotionValuesDataset);
        }
    }
}
```

This code is executed multiple times within a for loop. The error message is occurring on this line:

> EmotionValuesDataset emotionValuesDataset = currentEmotionValuesDataset.next();

Any help is much appreciated.<issue_comment>username_1: ```
summedEmotionValuesArrayList.add(newEmotionValuesDataset);
```

This line is making the problems. You should create another list (name it yourTempList) and add elements there; after exiting the for loop, just use

```
summedEmotionValuesArrayList.addAll(yourTempList)
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: While iterating over an ArrayList, the iterator is always checking the modCount against the value it saved at the start of the iteration. This value changes with every add, remove, set, ... and if it changes during an iteration a `ConcurrentModificationException` is thrown. If you want to remove the item you are currently on you can use the Iterator#remove method, which knows to update the expectedModCount.

As you are adding values to the ArrayList, you can save them in a temporary list and then, when the iteration is over, add all of those elements to the original list.

```
for(...) {
    ...
    temp.add(newValue);
    ...
}
summedEmotionValuesArrayList.addAll(temp);
```

Upvotes: 1 <issue_comment>username_3: You are trying to basically add the values to the arraylist `summedEmotionValuesArrayList`. If you look into the way `add` is implemented,

```
public void add(E e) {
    checkForComodification();
    try {
        int i = cursor;
        ArrayList.this.add(i, e);
        cursor = i + 1;
        lastRet = -1;
        expectedModCount = modCount;
    } catch (IndexOutOfBoundsException ex) {
        throw new ConcurrentModificationException();
    }
}
```

And, if you look at the implementation of `checkForComodification()`, you can see that it checks `modCount`. So, based on that, and the documentation at <https://docs.oracle.com/javase/9/docs/api/java/util/AbstractList.html>, you can see the following.

> If the value of this field changes unexpectedly, the iterator (or list
> iterator) will throw a ConcurrentModificationException in response to
> the next, remove, previous, set or add operations. This provides
> fail-fast behavior, rather than non-deterministic behavior in the face
> of concurrent modification during iteration.

Therefore, the way you can solve this is by creating a new temp list. This has already been answered. Anyway,

```
ArrayList tempList = new ArrayList();
for(...){
    // use your logic, but instead of
    // summedEmotionValuesArrayList.add(newEmotionValuesDataset);
    tempList.add(newEmotionValuesDataset);
}
// outside of the for loop
summedEmotionValuesArrayList.addAll(tempList)
```

I hope it helps!

Upvotes: 2
2018/03/17
1,011
3,527
<issue_start>username_0: Say i have 974 lines of data. What i am trying to do is to create a new np.array from a dataframe in the following format: Add first 10 lines to an np.array until the end of dataframe only if the length is 10. Here how i have solved this:

```
clen = len(df)
X = []
for i in range(clen):
    if len(df[i:i+10]) == 10:
        X.append(np.array(df[i:i+10]).astype(float))
```

I believe there is a better way to do that but don't want to do something wrong and couldn't find it. Also what i want to do is to add every 11th line to another np.array(). That i couldn't find yet. So i have 974 rows. Beginning from first row, every 10 rows will be added to X array and the next one will be added to another array. Btw, X will be something like this:

```
[[0,1,2,3,4,5,6,7,8,9],
 [1,2,3,4,5,6,7,8,9,10],
 [2,3,4,5,6,7,8,9,10,11]
 ....]
```

and y will be

```
[10,11,12....]
```

I hope i could explain what i need
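As a sketch of the windowing the question describes (plain Python lists standing in for the DataFrame and `np.array`, an assumption made to keep the snippet dependency-free):

```python
def make_windows(rows, width=10):
    """Collect every run of `width` consecutive rows into X, and the row
    immediately after each run into y, as the question describes."""
    X, y = [], []
    # Stopping early enough that both the window and its follower exist
    # also replaces the question's `if len(...) == 10` check.
    for i in range(len(rows) - width):
        X.append([float(v) for v in rows[i:i + width]])
        y.append(float(rows[i + width]))
    return X, y

rows = list(range(974))
X, y = make_windows(rows)
print(X[0])   # the first window: rows 0..9
print(y[:3])  # the rows that follow each window
```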
2018/03/17
789
2,000
<issue_start>username_0: I have an object like this for example:

```
var obj = {
  107: {name: "test", id: 772},
  124: {name: "hello", id: 123},
  120: {id: 213}
}
```

How can I retrieve an array of `name`s from `obj`, for example the output should be: `["test", "hello"]`

How would I go about doing this?<issue_comment>username_1: You could use `Object.keys()` to get an array of keys for the obj object, then reduce through those:

```js
const obj = {
  107: {name: "test", id: 772},
  124: {name: "hello", id: 123},
  120: {id: 213}
}

console.log(
  Object.keys(obj).reduce((a, key) => {
    obj[key].name && a.push(obj[key].name)
    return a
  }, [])
)
```

Upvotes: 1 <issue_comment>username_2: You can use the functions `reduce`, `Object.values` and the function `includes` to avoid repeated values.

```js
var obj = {
  '107': {name: "test", id: 772},
  '124': {name: "hello", id: 123},
  '125': {name: "hello", id: 1233},
  '120': {id: 213}
}

var result = Object.values(obj).reduce((a, {name}) => {
  if (name && !a.includes(name)) a.push(name);
  return a;
}, []);

console.log(result);
```

```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```

To avoid repeated values, you can use the Object `Set` as well:

```js
var obj = {
  '107': {name: "test", id: 772},
  '124': {name: "hello", id: 123},
  '125': {name: "hello", id: 1233},
  '120': {id: 213}
}

var result = Array.from(Object.values(obj).reduce((a, {name}) => {
  if (name) a.add(name);
  return a;
}, new Set()).values());

console.log(result);
```

```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```

Upvotes: 3 [selected_answer]<issue_comment>username_3: Since the object is always in this form you can do a one-line filter/map:

```js
var obj = { 107: {name: "test", id: 772}, 124: {name: "hello", id: 123}, 120: {id: 213}}

var names = Object.values(obj).filter(v => v.name).map(i => i.name)

console.log(names)
```

Upvotes: 2
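The filter-then-map shape used in the answers ports to most languages; for comparison, the same extraction in Python (an aside, not part of the thread):

```python
obj = {
    107: {"name": "test", "id": 772},
    124: {"name": "hello", "id": 123},
    120: {"id": 213},
}

# Keep only entries that actually carry a "name" (the filter step),
# then pull the name out of each survivor (the map step).
names = [entry["name"] for entry in obj.values() if "name" in entry]
print(names)  # ['test', 'hello']
```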
2018/03/17
515
1,895
<issue_start>username_0: I have a Laravel query that looks like this

```
$users = DB::table("users")
    ->select('id')
    ->where('accounttype', 'standard')
    ->get()->all();
```

It is working and returning the id for all standard accounts. I am trying to limit it to only return results from the last 30 days; the date the user was created is stored as a timestamp in `created_at`. Is it best to do this in the query or should I process the results afterwards?<issue_comment>username_1: You can use carbon along with the `where` clause:

```php
use Carbon\Carbon;

$users = DB::table("users")
    ->select('id')
    ->where('accounttype', 'standard')
    ->where('created_at', '>', now()->subDays(30)->endOfDay())
    ->get()->all();
```

As noted in the comments, do as much in the query as possible until you notice performance issues or your queries become unreadable.

Upvotes: 6 [selected_answer]<issue_comment>username_2: You can use it

```
\App\Model::whereBetween('date', [now()->subdays(30), now()->subday()])->get();
```

Upvotes: 2 <issue_comment>username_3: **You can use it**

```php
<?php

namespace App\Http\Controllers;

use App\User;
use Illuminate\Http\Request;
use Carbon\Carbon;

class tblUserController extends Controller
{
    public function getSixtyMonthUser() {
        $date = Carbon::today()->subDays(60);
        $users = User::where('created_at', '>=', $date)
            ->where('accounttype', 'standard')
            ->select('id')
            ->get();
        return response()->json($users, 200);
    }
}
```

Upvotes: -1 <issue_comment>username_4: ```php
$thirty_days_ago = date('Y-m-d', strtotime("-31 days"));
$users_thirty_days_ago = User::where('accounttype', 'standard')
    ->whereDate('last_online', ">=", $thirty_days_ago)
    ->count();
```

Upvotes: 2
2018/03/17
713
2,369
<issue_start>username_0: What I'm dealing with is a nested for loop that finds perfect numbers. The iterations to find the perfect number came about pretty quickly. However, I need the output to show a string that says, "perfect number found" and have that happen once. Then on the same line, print out all the perfect numbers. So for example, let's say I have two inputs, 1 to 30. That means that the perfect numbers are 6 and 28. The output should be something like: `Perfect numbers: 6 28`. What I'm confused on is how to print out just that string once in the for loop I have created. Here's what I have so far.

```
#include <iostream>

int main()
{
    int a, b;
    std::cin >> a >> b;
    int count = 0;
    for (int i = a; i <= b; i++)
    {
        int sum = 0;
        for (int j = 1; j <= i; j++)
        {
            if (i % j == 0)
            {
                sum += j;
            }
        }
        if (i * 2 == sum)
        {
            std::cout << i << " ";
            count++;
        }
    }
    std::cout << "\n";
    std::cout << "number of perfect numbers found: " << count << std::endl;
    return 0;
}
```
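The layout fix being asked about, printing the label exactly once and then every hit on the same line, can be sketched language-agnostically. Here it is in Python (an illustration only; the question itself is C++):

```python
def divisor_sum(n):
    # Sum of all divisors of n, n itself included (mirrors the question's inner loop).
    return sum(j for j in range(1, n + 1) if n % j == 0)

a, b = 1, 30
perfect = [i for i in range(a, b + 1) if divisor_sum(i) == 2 * i]

# Print the label exactly once, then every perfect number after it.
print("Perfect numbers:", *perfect)
print("number of perfect numbers found:", len(perfect))
```

The same structure carries over to the C++ version: emit `"Perfect numbers: "` before entering the outer loop, and only the numbers inside it.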
2018/03/17
2,399
8,269
<issue_start>username_0: I have a function `foo()` protected by a mutex `m` which is defined as a local static variable of `foo()`. I'd like to know whether it's safe to call `foo()` in the destructor of an object `bar` with static storage duration:

```
// foo.h
void foo();

// foo.cpp
#include "foo.h"
#include <mutex>

void foo() {
    static std::mutex m;
    std::lock_guard<std::mutex> lock(m);
    // ...
}

// bar.h
struct Bar {
    ~Bar();
};

extern Bar bar;

// bar.cpp
#include "bar.h"
#include "foo.h"

Bar::~Bar() {
    foo();
}

Bar bar;

// main.cpp
int main() {
    Bar bar;
    return 0;
}
```

If `std::mutex` is trivially destructible, this should be safe, because `bar` will be destructed before `m`. On GCC 5.4, Ubuntu 16.04, calling `std::is_trivially_destructible<std::mutex>::value` returns `true`, so it seems okay at least in this compiler. Any definitive answer?

Related: Google C++ Style Guide on [Static and Global Variables](https://google.github.io/styleguide/cppguide.html#Static_and_Global_Variables)

---

**Edit**

Apparently, I wasn't clear enough and should have provided more context. Yes, the underlying problem is that I want `bar` to be destructed before `m`. This is the "destruction" part of the well known "static initialization fiasco problem", see for example: <https://isocpp.org/wiki/faq/ctors#construct-on-first-use-v2>

> The point is simple: if there are any other static objects whose
> destructors might use ans after ans is destructed, bang, you’re dead.
> If the constructors of a, b and c use ans, you should normally be okay
> since the runtime system will, during static deinitialization,
> destruct ans after the last of those three objects is destructed.
> However if a and/or b and/or c fail to use ans in their constructors
> and/or if any code anywhere gets the address of ans and hands it to
> some other static object, all bets are off and you have to be very,
> very careful.

This is why Google recommends against using static objects, *unless they are trivially destructible*.
The thing is, if the object is trivially destructible, then the order of destruction doesn't really matter. Even if `m` is "destructed" before `bar`, you can still in practice use `m` in the destructor of `bar` without crashing the program, because the destructor effectively does nothing (it doesn't deallocate any memory or release any other type of resources). And in fact, if `m` is trivially destructible, then the program may not even destruct `m` at all, which actually ensures that `m` is "destructed" after `bar` or any other static objets which are not trivially destructible. See for instance: <http://en.cppreference.com/w/cpp/language/lifetime#Storage_reuse> > > A program is not required to call the destructor of an object to end > its lifetime if the object is trivially-destructible or if the program > does not rely on the side effects of the destructor. > > > For these reasons, it is in practice overkill to use complex singleton idioms such as the [Nifty Counter idiom](https://en.wikibooks.org/wiki/More_C%2B%2B_Idioms/Nifty_Counter) if your singleton is trivially destructible. In other words, if `std::mutex` is trivially destructible, then my code sample above is safe: `m` is either destructed after `bar`, or it is "technically destructed" before `bar` but wouldn't cause a crash anyway. However, if `std::mutex` is not trivially destructible, then I may need to use the Nifty Counter idiom instead, or alternatively the simpler but "purposefully leaking" [Trusty Leaking idiom](https://isocpp.org/wiki/faq/ctors#static-init-order-on-first-use). Related: * <https://stackoverflow.com/a/335746/1951907> * <https://stackoverflow.com/a/17712497/1951907><issue_comment>username_1: On VC++ std::mutex is not trivially destructible, so the answer to your question is no. What (I think) you really want to know is how to be sure the destructor for `Bar bar` is called before the destructor for `foo::m`. Well you can't unless they are in the same translation unit. 
If you define them both in a file called foobar.cpp and define foo() above Bar bar, you're good.

Upvotes: 2 <issue_comment>username_2: What the Standard Says
======================

The answer is "no": according to the [C++17 Standard](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/n4659.pdf), the type `std::mutex` is not required to have a trivial destructor. General requirements of mutex types are described in [thread.mutex.requirements], and the only paragraph describing destructibility is the following:

> The mutex types shall be DefaultConstructible and Destructible. If
> initialization of an object of a mutex type fails, an exception of
> type system\_error shall be thrown. The mutex types shall not be
> copyable or movable.

Later, the section [thread.mutex.class] details `std::mutex` in particular but does not specify additional requirements apart from the following paragraph:

> The class mutex shall satisfy all of the mutex requirements (33.4.3).
> It shall be a standard-layout class (Clause 12).

Though, note that among all mutex types, `std::mutex` is the only one with a `constexpr` constructor, which is often a hint that the type might be trivially destructible as well.

What Compilers Say
==================

(thanks @liliscent for creating the test)

```
#include <iostream>
#include <mutex>
#include <type_traits>

using namespace std;

int main() {
    std::cout << boolalpha << is_trivially_destructible<std::mutex>::value << "\n";
}
```

* [Run with Clang 7.0.0](https://wandbox.org/permlink/nOpen9kQsXie3NUi): `false`
* [Run with GCC 8.0.1](https://wandbox.org/permlink/6yh46R3ZSOxJWeNA): `true`
* [Run with MSVC Version 19.00.23506](http://rextester.com/FURF93364): `false`

In other words, it seems that currently, only GCC on Linux platforms provides a trivial destructor for `std::mutex`.
However, note that there is a [Bug Request](https://bugs.llvm.org/show_bug.cgi?id=27658) to make `std::mutex` trivially destructible in Clang on some platforms: > > For these reasons I believe we should change 'std::mutex' to be > trivially destructible (when possible). This means *NOT* invoking > "pthread\_mutex\_destroy(...)" in the destructor. > > > I believe is a safe change on some pthread implementations. The main > purpose of "pthread\_mutex\_destroy" is to set the lock to an invalid > value, allowing use-after-free to be diagnosed. AFAIK mutex's > initialized with "PTHREAD\_MUTEX\_INITIALIZER" own no resources and so > omitting the call will not cause leaks. > > > On other pthread implementations this change will not be possible. > > > A follow-up message details that platforms where this change is possible seem to include NPTL (GLIBC) and Apple, while it doesn't seem possible on FreeBSD. Note that the bug request also mentions the problem I was referring to in my question (emphasis mine): > > A **trivial destructor is important** for similar reasons. If a mutex is > used during dynamic initialization it might also be used during > program termination. If a static mutex has a non-trivial destructor it > will be invoked during termination. **This can introduce the "static > deinitialization order fiasco"**. > > > What Should I Do? ================= If you need a global mutex in portable code (for example to protect another global object such as a memory pool, etc.), and are in a use case that may be subject to the ["static deinitialization order fiasco"](https://isocpp.org/wiki/faq/ctors#construct-on-first-use-v2), then you need to use careful singleton techniques to ensure that the mutex is not only created before first use, but also destructed after last use (or not destructed at all). 
The simplest approach is to purposefully "leak" a dynamically allocated local static mutex, like so, which is fast and most likely safe:

```
void foo() {
    static std::mutex* m = new std::mutex;
    std::lock_guard<std::mutex> lock(*m);
    // ...
}
```

Otherwise, a cleaner approach is to use the [Nifty Counter idiom](https://en.wikibooks.org/wiki/More_C%2B%2B_Idioms/Nifty_Counter) (or "Schwarz Counter") to control the lifetime of the mutex, although be aware that this technique introduces a small overhead at startup and termination of the program.

Upvotes: 4 [selected_answer]
2018/03/17
1,835
6,319
<issue_start>username_0: I am trying to create a database for my Neo4J Desktop, but every time I attempt to do so, I get the following:

`Database failed to create: Error: Could not change password`

How can I fix this and successfully create the Database? I'm running Neo4J Desktop 1.0.18 and attempting to work with Neo4J 3.3.3.<issue_comment>username_1: I'm having exactly the same problem. Specs below-

```
Neo4j Desktop: 1.0.18
Neo4j: 3.3.4
Platform: Windows 10 Enterprise v 1703
```

To be clear, this is occurring (for me) on the ***Windows*** platform. It would be helpful if other commenters could clarify which platform they are using.

Since my original reply I have tried-

* Upgrading to Neo4J 1.0.19
* Installing for all users, not just my account

Neither of these has helped. Looking at the Neo4J log file, I think this shows where the problem is-

```
[2018-03-29 11:20:04:0542] [info] Executing 'C:\Users\\.Neo4jDesktop\neo4jDatabases\database-c2c04041-5a81-43fe-a004-56b1c47f4c1c\installation-3.3.4\bin\neo4j-admin.bat' [ 'set-initial-password', '<PASSWORD>' ]
[2018-03-29 11:20:08:0511] [error] Neo4jAdmin: Invoke-Neo4jAdmin : An object at the specified path C:\Users\ does not exist.
At line:1 char:163
+ ... agement.psd1'; Exit (Invoke-Neo4jAdmin set-initial-password password)
+                          ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```

There is a `neo4j-admin.bat` file at the path specified, but it looks as if Neo4J isn't finding it. The error message shows the path to the .bat file truncated at the "personal folder" point. In our corporate PC config we have a domain name appended to our personal file paths, so my personal folder looks like `.` . The error has converted this path to the old DOS "8.3" format in the form "MYNAM~1.COR" and then choked. I'm guessing at this point, but I would say Neo4J has not been tested on a Windows environment with "complex" names in the users' file paths.
Somewhere they are falling foul of Windows' still-present "file name mangling" feature that munges long filenames down to 8.3 format.

UPDATE: Out of interest I tried manually running the `neo4j-admin.bat` script (in a cmd window) that the error points to; results were-

```
neo4j-admin "set-initial-password" password
Invoke-Neo4jAdmin : Unable to determine the path to java.exe
At line:1 char:163
+ ... agement.psd1'; Exit (Invoke-Neo4jAdmin set-initial-password password)
+                          ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Write-Error], WriteErrorException
    + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Invoke-Neo4jAdmin
```

I think this indicates that the script is trying to run, but needs a bunch of environment set up that would normally be handled by Neo4J itself. I feel more confident in saying this is an issue, at least for the Windows install, caused by Neo4J's configuration tools and scripts not handling Windows pathnames correctly.

UPDATE2: I reported this as a bug in Neo4J (BugID #11429 in GITHub). The developers are saying it has been fixed; the fix will be included in a future release. Full text of the response-

```
This issue should have been resolved with #11469. The fixed script will be included
in upcoming releases. For the time being, you can override your TEMP and TMP
environment variables with a path that doesn't contain MSDOS8.3 convention path
entries (as suggested by @chrisp429 in #9646). Feel free to re-open the issue if
you encounter the error again with new versions.
```

Upvotes: 2 <issue_comment>username_2: I had a similar problem; here is what I did to solve it:

1. remove the file located at %NEO4J\_HOME%/data/dbms/auth or at least move it somewhere else.
2. restart Neo4J Desktop
3. If the problem isn't solved yet, put the file mentioned in 1.
back at %NEO4J\_HOME%/data/dbms/auth **PS**: %NEO4J\_HOME% is the path to your Neo4J installation folder Upvotes: 2 <issue_comment>username_3: I had same issue with version Neo4J 1.0.21 and it got resolves when I uninstalled that version and they have Upgrade to Neo4J 1.0.22 so installed that will solved your issue. Let me know if that don't work for you.. Upvotes: 1 <issue_comment>username_4: For me the cause for this error was a `{` in my password. I had the same alert: *"Database failed to create: Error: Could not change password."* I read through the logs at .Neo4jDesktop/log.log and found this line: `Unexpected token '}' in expression or statement.` Once I removed that character from my password I was able to create a database. Upvotes: 2 <issue_comment>username_5: I had the same issue. After several trials, I deleted the folder assigned as "Data Path". And then start Neo4j Desktop. It works now. Hope this helps. Upvotes: 1 <issue_comment>username_6: I was in the same situation apparently. First check that your problem was the same as mine! 1 - Within the application `Neo4J Desktop` access the menu: Developer\Developer Tools [![enter image description here](https://i.stack.imgur.com/VcwQk.png)](https://i.stack.imgur.com/VcwQk.png) 2 - Select the `Console` tab 3 - Try again to create the Database 4 - Check if the error message appears: `Neo4jAdmin: Error: missing "server" JVM at "C:\Program Files (x86)\Java\jre1.8.0_201\bin\server\jvm.dll". 
Please install or use the JRE or JDK that contains these missing components.` [![enter image description here](https://i.stack.imgur.com/On5gn.png)](https://i.stack.imgur.com/On5gn.png) If this is happening to you, do the following:

1 - Go to the directory where the JRE is installed; in my case it was: `C:\Program Files (x86)\Java\jre1.8.0_201\bin`

2 - Within this directory create a folder called `server`

3 - Still in the bin directory, go to the `client` directory

4 - Copy all content from the `client` folder into the `server` folder

5 - Restart `Neo4J Desktop` as administrator

6 - Try again to create the database

Hope this helps! Upvotes: 3 <issue_comment>username_7: Make sure you have a JDK installed! That's what fixed this issue for me. Upvotes: 0 <issue_comment>username_8: I had the same issue on Mac. I finally fixed it by removing the whole config directory: `~/Library/Application Support/Neo4j Desktop` Upvotes: 0
2018/03/17
806
2,982
<issue_start>username_0: I have to make a basic bash script that keeps an eye on a specific file, and if I decide to delete this file a warning should come up to confirm the delete. The script should accept one parameter (the file that is going to be checked on delete). How can I achieve this? I don't understand how I can check whether the user actually tried to delete the file while the script is running. Any tips would be really appreciated.<issue_comment>username_1: **This is a really bad idea.** This mechanism is trivial to circumvent, either intentionally or accidentally (running `>file` will delete its contents just as much as `rm file` does -- moreso, rather, as `rm` will leave other hardlinks in place, but `>file` will clear the data associated with the inode, no matter how many links to it there may be). If doing this prevents you from keeping good backups, deploying automated snapshots, or leveraging appropriate file permissions, it's doing much more harm than good.

Note that the whitelist in question is shell-specific -- if you add a file to the whitelist in one shell instance, it won't necessarily be contained there for another. Also note that the implementation is a proof-of-concept; it doesn't catch recursive removes of a parent directory of a file flagged to be preserved.

```
# Put this in your ~/.bashrc
declare -A whitelisted_files

# If our default readlink isn't the GNU one, but we have GNU readlink installed
# as "greadlink", use it.
if command -v greadlink >/dev/null 2>&1; then
  readlink() { greadlink "$@"; }
fi

whitelist_file() {
  local arg
  for arg; do
    whitelisted_files["$(readlink -m -- "$arg")"]=1
  done
}

rm() {
  local arg sigil_seen=0 confirm
  for arg; do
    if (( sigil_seen == 0 )); then
      [[ $arg = -- ]] && { sigil_seen=1; continue; }
      [[ $arg = -* ]] && continue
    fi
    if [[ ${whitelisted_files["$(readlink -m -- "$arg")"]} ]]; then
      read -p "File $arg is whitelisted; really remove?" confirm
      [[ $confirm = [Yy]* ]] || return 1
    fi
  done
  command rm "$@"
}
```

Thereafter:

```
touch preserve_me
whitelist_file preserve_me
rm preserve_me
```

...will provide a prompt:

```none
File preserve_me is whitelisted; really remove?
```

Upvotes: 1 <issue_comment>username_2: You can add the following function to your .bashrc:

```
rem() {
  read -a files <<< $@
  for i in ${files[@]}; do
    if [[ $i == "1.txt" ]]; then
      rm -i $i
    else
      rm $i
    fi
  done
}
```

Here replace `1.txt` with the filename you want confirmed for removal. Then alias `rm` to this function:

```
alias rm='rem'
```

If the user runs the command `rm 1.txt 2.txt`, only `2.txt` will be deleted immediately, and the following prompt will be displayed: `rm: remove regular empty file '1.txt'?`

**Remark:** As noted in the comments of this answer, the function `rem` does not properly handle file names with spaces, among other problems, so the answer of username_1 is the way to go. Upvotes: 0
2018/03/17
682
2,454
<issue_start>username_0: I have successfully wrapped the function "notMyFunction" in a promise so I can use "notMyFunction" as if it were a promise. Like so:

```js
// I do not have access to "notMyFunction"
function notMyFunction(a, cb) {
  if (a === 'latest') {
    cb('123');
  } else {
    cb('error');
  }
}

// changing callback into a promise
function deploy(someVariable) {
  return new Promise((resolve, reject) => {
    notMyFunction(someVariable, resolve);
  });
}

// using callback as if it were a promise
deploy('latest').then((time) => {
  console.log(time)
}, (err) => {
  console.log(err)
})
```

My question is: How do I do the same thing, when "notMyFunction" actually passes two arguments to the callback like so:

```
function notMyFunction(a, cb) {
  if (a === 'latest') {
    cb(null, '123');
  } else {
    cb('error', null);
  }
}

function deploy(someVariable) {
  return new Promise((resolve, reject) => {
    notMyFunction(someVariable, resolve);
  });
}

deploy('latest').then((time) => {
  // I need access to 123, not null
  console.log(time)
}, (err) => {
  console.log(err)
})
```<issue_comment>username_1: Your promises are not handling errors well. There will never be a `.catch()` since you are never calling `reject()`. I believe what you really want is something along the lines of:

```
function deploy(someVariable) {
  return new Promise((resolve, reject) => {
    notMyFunction(someVariable, (firstArg, ...otherArgs) => {
      // assuming that, on errors, your first argument is 'error'
      if (firstArg === 'error') {
        reject([firstArg, ...otherArgs]);
      } else {
        resolve([firstArg, ...otherArgs]);
      }
    });
  });
}
```

Upvotes: 0 <issue_comment>username_2: I think you might want to look into the concept of "promisification". Newer versions of Node.js have a [util.promisify](https://nodejs.org/api/util.html) function that can handle this. Dr. Axel has a [great write-up on util.promisify](http://2ality.com/2017/05/util-promisify.html). 
If you're in the browser, you may want to consider pulling in a Promisify polyfill/shim like [es6-promisify](https://www.npmjs.com/package/es6-promisify). Having a consistent way to promisify functions across your codebase will help you to avoid a lot of potential problems. Upvotes: 1
2018/03/17
1,370
5,717
<issue_start>username_0: I had some issues with async calls, but after understanding event-driven functions in Node a bit better, I have the following code. It basically does 2 SQL queries. The first checks to see if an existing phone number is in the table. If there is, nothing happens. If there isn't, then it runs a second query to add a new value into the table. Currently it's event-driven, but .. how do I re-design it so that the outer function query is a separate function, and can even be used as a validation check for code elsewhere? (basically turning it into a generic helper function)?

```
router.post('/', function(req, res){
  var mysql = req.app.get('mysql');

  //values for 1st query - initial check
  var customerCheck = "SELECT phone_number FROM customer WHERE phone_number=?";
  var phoneNumberCheck = [req.body.phone_number];

  //values for after 1st query validation passes
  var sql = "INSERT INTO customer (name, phone_number, points, is_activated) VALUES (?,?,?,?)";
  var inserts = [req.body.name, req.body.phone_number, 0, true];

  mysql.pool.query(customerCheck, phoneNumberCheck, function(error, results, fields){
    if(error){
      console.log("Failed to verify!");
      res.write(JSON.stringify(error));
      res.end();
    }else{
      console.log(results);
      if(results.length){
        console.log("phone number exists, not adding.");
        res.redirect('/view_customers');
      }else{
        mysql.pool.query(sql, inserts, function(error, results, fields){
          if(error){
            console.log("Failed to insert customer to DB!");
            res.write(JSON.stringify(error));
            res.end();
          }else{
            res.redirect('/view_customers');
          }
        });
      }
    }
  });
});
```<issue_comment>username_1: You can write a function for it that returns a promise to simplify things, something like:

```
function checkFn(mysql, v) {
  return new Promise((resolve, reject) => {
    var customerCheck = "SELECT phone_number FROM customer WHERE phone_number=?";
    var phoneNumberCheck = [v];
    mysql.pool.query(customerCheck, phoneNumberCheck, function(error, results, fields){
      if(error){
        console.log("Failed to verify!");
        reject(); // reject promise
      }else{
        console.log(results);
        if(results.length){
          console.log("phone number exists, not adding.");
          resolve(true); // resolve promise with true
        }else{
          resolve(false); // resolve promise with false
        }
      }
    })
  })
}
```

then you can use it with async/await

```
router.post('/', async function(req, res){ // notice: async
  try {
    let numberExist = await checkFn(req.app.get('mysql'), req.body.phone_number)
    if (numberExist) {
      // do something if number exists
    } else {
      // do something if number doesn't exist
    }
  } catch (e) {
    // do something if error
  }
})
```

read : [Async function](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function) Upvotes: 0 <issue_comment>username_2: Here's a more organized way to do it, of course it can be improved, but I leave that to you. I added a customer class, where you have `check` and `insert` functions, now you can use them individually. I wrapped the DB query in a promise, to avoid callback hell ([Learn how to avoid it here](https://blog.risingstack.com/node-js-async-best-practices-avoiding-callback-hell-node-js-at-scale/)) Finally the router function has become really simple and easy to read.

**customer.js**

```
class Customer {
  constructor(mysql) {
    this.mysql = mysql;
  }

  async check(number) {
    const statement = 'SELECT phone_number FROM customer WHERE phone_number=?';
    const results = await this.query(statement, [number]);
    // If there is a customer with that phone number
    // we return true, false otherwise.
    return results.length;
  }

  insert(name, number) {
    const statement = 'INSERT INTO customer (name, phone_number, points, is_activated) VALUES (?,?,?,?)';
    return this.query(statement, [name, number, 0, true]);
  }

  query(query, placeholders) {
    // We wrap the query in a promise
    // So we can take advantage of async/await in the other functions
    // To make the code easier to read avoiding callback hell
    return new Promise((resolve, reject) => {
      this.mysql.pool.query(query, placeholders, (error, results) => {
        if(error) return reject(error);
        resolve(results);
      });
    });
  }
}

module.exports = Customer;
```

**router**

```
const Customer = require('./customer');
// ... rest of your code

router.post('/', async (req, res) => {
  const customer = new Customer(req.app.get('mysql'));
  try {
    // We check whether the client exists or not
    const check = await customer.check(req.body.phone_number);
    if(!check)
      // We insert the customer if it doesn't exist
      await customer.insert(req.body.name, req.body.phone_number);
    // After successful insert or if the customer already exists
    // we redirect to /view_customers
    res.redirect('/view_customers');
  } catch(error) {
    // Probably a database error
    console.log('Oops');
    res.write(JSON.stringify(error));
    res.end();
  }
});
```

Upvotes: 2 [selected_answer]
2018/03/17
701
2,043
<issue_start>username_0: I have a DataFrame

```
customer_number        purchase_time  quantity
             14  2007-03-01 07:06:00        10
             20  2007-03-12 13:05:00        13
```

I tried to find the total quantity bought in the morning and afternoon. I converted `purchase_time` into datetime

```
df['purchase_time'] = pd.to_datetime(df['purchase_time'])

# Baskets bought in morning.
df[df['purchase_time'] < '12:00:00']
```

However, the result is the original dataset.<issue_comment>username_1: You may not require a conversion here, just compare the times lexicographically -

```
df[df['purchase_time'].str.split().str[1] < '12:00:00']

   customer_number        purchase_time  quantity
0               14  2007-03-01 07:06:00        10
```

Although, for an extra layer of security, I'd recommend converting to `timedelta` and comparing - these comparisons still work with strings (pandas is miraculous that way) -

```
df[pd.to_timedelta(
       df['purchase_time'].str.split().str[1], errors='coerce'
   ) < '12:00:00']

   customer_number        purchase_time  quantity
0               14  2007-03-01 07:06:00        10
```

Upvotes: 3 <issue_comment>username_2: You can

```
df[df['purchase_time'].dt.time < pd.to_datetime('12:00:00').time()]
Out[152]:
   customer_number       purchase_time  quantity
0               14 2007-03-01 07:06:00        10
```

Upvotes: 3 <issue_comment>username_3: Assuming that `purchase_time` is of `datetime` dtype:

```
In [88]: df.query("purchase_time.dt.hour < 12 and purchase_time.dt.month in [3,6]")
Out[88]:
   customer_number       purchase_time  quantity
0               14 2007-03-01 07:06:00        10
```

Upvotes: 3 <issue_comment>username_4: Use the Boolean array in a groupby

```
df.groupby(df.purchase_time.dt.hour < 12).sum().rename(
    {True: 'Morning', False: 'Afternoon'})

               customer_number  quantity
purchase_time
Afternoon                   20        13
Morning                     14        10
```

Upvotes: 3 [selected_answer]
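Putting the selected answer together end to end (a sketch that rebuilds the question's two rows as inline sample data and keeps only the `quantity` totals):

```python
import pandas as pd

# Rebuild the sample frame from the question.
df = pd.DataFrame({
    'customer_number': [14, 20],
    'purchase_time': ['2007-03-01 07:06:00', '2007-03-12 13:05:00'],
    'quantity': [10, 13],
})

# The string comparison in the question fails because it compares the
# whole timestamp string; convert first, then bucket on the hour.
df['purchase_time'] = pd.to_datetime(df['purchase_time'])

totals = (
    df.groupby(df['purchase_time'].dt.hour < 12)['quantity']
      .sum()
      .rename({True: 'Morning', False: 'Afternoon'})
)
print(totals)
```

Grouping by the boolean mask directly avoids creating a helper column; `rename` just relabels the `True`/`False` index entries.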
2018/03/17
764
2,166
<issue_start>username_0: I am unable to click a drop-down item. I am at: I can click the proxy location option using the code below, but I can't select any items from the drop-down; ``` driver.FindElement(By.XPath("/html/body/main/div[2]/form/fieldset/div[2]/div[1]/p")).Click(); ``` how can I click the Germany or USA option from the drop-down? I've tried this code below and it didn't work ``` var dd = driver.FindElement(By.XPath("/html/body/main/div[2]/form/fieldset/div[2]/div[1]/p")).Click(); var select = new SelectElement(dd); select.SelectByValue(" Germany"); ```<issue_comment>username_1: Without knowing the logic used by the drop down it can be hard to provide code that will work for you. But I can tell right away that it's is not a select tag and the SelectByValue won't work on it. This is where things get nasty and you have to use ActionChains. You have to find the child element that contains the text you want and then click on it. ``` dropdown = driver.FindElement(By.XPath("/html/body/main/div[2]/form/fieldset/div[2]/div[1]/p")) actions = webdriver.ActionChains(driver) actions.click(dropdown) // var childSelection = // Logic to find child elements of dropdown actions.click(childSelection).perform(); ``` Upvotes: 0 <issue_comment>username_2: You could do: ``` Select dropdown = new SelectElement(driver.findelement(By.id("dropdown"))); dropdown.selectByVisibleText("Germany"); or dropdown.selectByIndex(2); ``` Some further reading that might help: 1. [How to Select Option from DropDown using Selenium Webdriver](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwiojKuwyPbZAhXOqYMKHbWMB18QFggpMAA&url=https%3A%2F%2Fwww.guru99.com%2Fselect-option-dropdown-selenium-webdriver.html&usg=AOvVaw1hhBNiYnC9t9axswWBnO36) 2. 
[How to Select a Dropdown in Selenium WebDriver using Java](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=0ahUKEwiojKuwyPbZAhXOqYMKHbWMB18QFgg3MAE&url=https%3A%2F%2Floadfocus.com%2Fblog%2F2016%2F06%2F13%2Fhow-to-select-a-dropdown-in-selenium-webdriver-using-java%2F&usg=AOvVaw3funH5GKu4Fm5rVKDwQKPZ) Upvotes: 2 [selected_answer]
2018/03/17
953
4,012
<issue_start>username_0: Suppose I have this code:

```
public class Foo {
    public enum Bar {FooBar, BarFoo, FooFoo, BarBar}

    public Foo (Bar bar) {
        //Do something
    }
}
```

Now, when creating an object, I have several different cases for what might be created: `FooBar, BarFoo, FooFoo, BarBar`. Depending on what these cases are, how might I execute a different method for each? I suppose I am looking to do something a bit like this:

```
public class Foo {
    public enum Bar {FooBar, BarFoo, FooFoo, BarBar}

    public Foo (Bar bar) {
        switch (bar) {
            case FooBar:
                fooBar(bar);
                break;
            case BarFoo:
                barFoo(bar);
                break;
            case FooFoo:
                fooFoo(bar);
                break;
            case BarBar:
                barBar(bar);
                break;
            default:
                break;
        }
    }

    void fooBar (Bar bar) {
        //Treat this as the constructor class for case FooBar
    }

    void barFoo (Bar bar) {
        //Treat this as the constructor class for case barFoo
    }

    void fooFoo (Bar bar) {
        //Treat this as the constructor class for case fooFoo
    }

    void barBar (Bar bar) {
        //Treat this as the constructor class for case barBar
    }
}
```

In addition, is this a bad practice? Would it be better to create a separate class for each case and drop the enum?<issue_comment>username_1: I'd implement a method inside the enum to execute it directly. It removes the unnecessary `switch` statement and methods.

```
public class Foo {
    public Foo (Bar bar) {
        bar.execute();
    }

    public enum Bar {
        FooBar, BarFoo, FooFoo, BarBar;

        public void execute() {
            // TODO
            // System.out.println(this.toString());
        }
    }
}
```

If you really need a unique method for each enum instance, make the `execute` method abstract and `@Override` it for each one.

```
public enum Bar {
    FooBar {
        @Override
        public void execute() {
        }
    },
    BarFoo {
        @Override
        public void execute() {
        }
    },
    FooFoo {
        @Override
        public void execute() {
        }
    },
    BarBar {
        @Override
        public void execute() {
        }
    };

    public abstract void execute();
}
```

Upvotes: 1 <issue_comment>username_2: Move case-specific code into `enum Bar`, and use the override in the constructor:

```
public class Foo {
    public enum Bar {
        FooBar {
            @Override
            public void configure(Foo obj) {
                ... // Code specific to this enum case
            }
        },
        BarFoo {
            @Override
            public void configure(Foo obj) {
                ... // Code specific to this enum case
            }
        },
        FooFoo {
            @Override
            public void configure(Foo obj) {
                ... // Code specific to this enum case
            }
        },
        BarBar {
            @Override
            public void configure(Foo obj) {
                ... // Code specific to this enum case
            }
        };

        public abstract void configure(Foo obj);
    }

    public Foo (Bar bar) {
        bar.configure(this);
    }
}
```

The code inside each override of `configure(Foo)` gets access to the instance of `Foo` being configured by the constructor, letting you replace `switch` with a virtual dispatch. Upvotes: 2 <issue_comment>username_3: Enums can also have methods, so you could add an abstract method and override it for every case you need, something like this:

```
public enum Bar {
    FooBar {
        @Override
        public void doSomething() {
            // your code for FooBar
        }
    },
    BarFoo {
        @Override
        public void doSomething() {
            // your code for BarFoo
        }
    },
    FooFoo {
        @Override
        public void doSomething() {
            // .....
        }
    },
    BarBar {
        @Override
        public void doSomething() {
            // .....
        }
    };

    public abstract void doSomething();
}
```

Upvotes: 0
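A runnable sketch of the constructor-dispatch idea from the answers (the `mode` field and the strings are illustrative, not from the question):

```java
class Foo {
    public enum Bar {
        // Each constant supplies its own body for configure(), so the
        // constructor needs no switch at all.
        FooBar { @Override void configure(Foo f) { f.mode = "foo-bar setup"; } },
        BarFoo { @Override void configure(Foo f) { f.mode = "bar-foo setup"; } },
        FooFoo { @Override void configure(Foo f) { f.mode = "foo-foo setup"; } },
        BarBar { @Override void configure(Foo f) { f.mode = "bar-bar setup"; } };

        abstract void configure(Foo f);
    }

    String mode;

    public Foo(Bar bar) {
        bar.configure(this); // virtual dispatch replaces the switch
    }

    public static void main(String[] args) {
        System.out.println(new Foo(Bar.FooBar).mode); // foo-bar setup
    }
}
```

Adding a new `Bar` constant without a `configure` body is a compile error, which is the main advantage over the `switch`: the compiler enforces that every case is handled.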
2018/03/17
1,281
3,557
<issue_start>username_0: Currently I have a code that extracts information from an XML sheet, the problem is that I can not show the information as I want. demo.xml

```
<?xml version="1.0" encoding="UTF-8"?>
<Products>
    <Product>
        <code>111111</code>
        <name>The Name</name>
        <brand>The Brand</brand>
        <quantity>
            <store1>5</store1>
            <store2>4</store2>
        </quantity>
    </Product>
    <Product>
        <code>222222</code>
        <name>The Name</name>
        <brand>The Brand</brand>
        <quantity>
            <store1>6</store1>
            <store2>4</store2>
        </quantity>
    </Product>
</Products>
```

index.php

```
<?php
if (file_exists('demo.xml')) {
    $codes = [];
    $store1 = [];
    $xml = simplexml_load_file('demo.xml');
    foreach($xml->Product as $i => $product) {
        $codes[] = $product->code;
        $store1[] = $product->quantity->store1;
    }
    echo implode($codes,',');
    echo " ";
    echo implode($store1,',');
} else {
    exit('Error en archivo!');
}
?>
```

Result:

```
111111,222222
5,6
```

What I need:

```
╔═════════════╦══════════╦══════════╗
║ Code        ║ Store 1  ║ Store 2  ║
╠═════════════╬══════════╬══════════╣
║111111       ║ 5        ║ 4        ║
║222222       ║ 6        ║ 4        ║
╚═════════════╩══════════╩══════════╝
```

It could be shown in a table as shown above or simply in values separated by a comma to be processed later ...

```
111111,5,4
222222,6,4
...
```

Can anybody help me? Thank you!<issue_comment>username_1: If you change the way you build the data up in the loop, you can store all the data from the line in one element of an array instead of in different arrays. You can then implode() this alternative array one line at a time...

```
if (file_exists('demo.xml')) {
    $data = [];
    $xml = simplexml_load_file('demo.xml');
    foreach($xml->Product as $i => $product) {
        $data[] = [(string)$product->code
                  ,(string)$product->quantity->store1
                  ,(string)$product->quantity->store2
                  ];
    }
    foreach ( $data as $line ) {
        echo implode(",",$line).PHP_EOL;
    }
} else {
    exit('Error en archivo!');
}
```

Outputs...

```
111111,5,4
222222,6,4
```

Upvotes: 2 <issue_comment>username_2: You could store each "Product" data in an array. 
``` $xml = simplexml_load_file('demo.xml'); $codes = []; foreach($xml->Product as $i => $product) { $data = []; $data[] = (string)$product->code; $data[] = (string)$product->quantity->store1; $data[] = (string)$product->quantity->store2; $codes[] = $data; } foreach ($codes as $code) { echo implode(',', $code) . ' '; } ``` Outputs: ``` 111111,5,4 222222,6,4 ``` if you want a table: ``` echo " ``` "; echo "╔═════════════╦══════════╦══════════╗"." "; echo "║ Code ║ Store 1 ║ Store 2 ║"." "; echo "╠═════════════╬══════════╬══════════╣"." "; foreach ($codes as $code) { echo "║"; echo sprintf("%12s ║", $code[0]); echo sprintf("%9s ║", $code[1]); echo sprintf("%9s ║", $code[2]); echo " "; } echo "╚═════════════╩══════════╩══════════╝"." "; echo " ``` "; ``` Outputs: ``` ╔═════════════╦══════════╦══════════╗ ║ Code ║ Store 1 ║ Store 2 ║ ╠═════════════╬══════════╬══════════╣ ║ 111111 ║ 5 ║ 4 ║ ║ 222222 ║ 6 ║ 4 ║ ╚═════════════╩══════════╩══════════╝ ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: ``` php if (file_exists('demo.xml')) { $xml = simplexml_load_file('demo.xml'); $str = ''; foreach($xml-Product as $i => $product) { $str .= "| {$product->code} | {$product->quantity->store1} | {$product->quantity->store2} | \n"; } echo << {$str} END; } else { exit('Error en archivo!'); } ?> ``` Give you: ``` | | | | | --- | --- | --- | | 111111 | 5 | 4 | | 222222 | 6 | 4 | ``` Upvotes: 0
2018/03/17
1,153
3,313
<issue_start>username_0: I was creating a query that could create a post and add image links, hashtags and genres to it but...

1. I'm not sure if it's the good way to do it like I'm doing.
2. My query fails every time MATCH doesn't find a node and that could happen often.

This is my code:

```
MATCH (user:User) WHERE ID(user) = 611
WITH user
MERGE (post:Post {description: 'test', type: 'sketch'}) - [p:Post {date_upload: timestamp(), visual: 0 }] ->(user)
WITH user, post
MATCH (gen:Genre)<-[:IT]-(:GenreLang {name: 'Drammatico'})
MERGE (post)-[:HAS_GENRE]->(gen)
WITH user, post
MATCH (gen:Genre)<-[:IT]-(:GenreLang {name: 'Azione'})
MERGE (post)-[:HAS_GENRE]->(gen)
WITH user, post
MATCH (post)-[:HAS_GENRE]->(g)<-[:IT]-(r)
RETURN user, post, r.name
```

I create all those match merge with a foreach in Node.js, depending on how many genres or hashtags they give me I need to create more queries.
2018/03/17
1,001
2,937
<issue_start>username_0: I'm very much used Lua where you are able to do this, so please excuse me if this is not possible. Lets say on client.js I have 2 variables, both of these requiring modules

```
var EmbedManager = require('Embed');
var client = require('Client');

new EmbedManager()
  .init()
  .output()
```

from the module 'Embed' I need to be able to access the variable 'client' without passing anything as an argument. For the purpose of example, the files are stored like so;

```
Client.js
Embed.js
```
2018/03/17
328
1,201
<issue_start>username_0: I am trying to create a table that includes a column that contains just the current date (not a timestamp) using the MariaDB base. Which is the command that sets the content of the column as the current date upon table creation?<issue_comment>username_1: MariaDB introduced `DEFAULTs` for `DATETIME` in 10.0.1. See the [worklog](https://jira.mariadb.org/browse/MDEV-452). It is patterned after the [MySQL 5.6.5 feature](https://dev.mysql.com/doc/refman/5.6/en/timestamp-initialization.html), which has examples. If you want to default to only `CURDATE()`, I think you are out of luck in MariaDB and MySQL. A `BEFORE INSERT` `TRIGGER` with `SET NEW.datetime_col = CURDATE()` should give you the functionality:

```
CREATE TRIGGER set_date
BEFORE INSERT ON tbl
FOR EACH ROW
SET NEW.datetime_col = CURDATE();
```

Upvotes: 0 <issue_comment>username_2: Why do this on table creation, when there will be no data yet? If you need a date and not a datetime, set it in your code, well before adding a trigger: keep business logic at the business layer, not at the storage layer. Not your exact answer to the specific question, but hopefully helpful advice. By the way, I don't like triggers, so I am a bit biased. Upvotes: 1
2018/03/17
520
1,691
<issue_start>username_0: I have a function with an input parameter "text" , which consists of a string containing unknown number of 2-character Hexadecimal numbers separated by space. I need to read them and put them in separate array indexes or read them in for loop and work with them one by one. Each of these values represents an encrypted char(the hex values are ascii values of the characters). How could i do that? I think i should use sscanf(); but i can't figure out how to do it.

```
char* bit_decrypt(const char* text){
    int size=strlen(text)/3;
    unsigned int hex[size];
    char*ptr;
    for(int i=0; i
```

---

```
output is: "80 80 80 80 80 80 80 80 80 80 80 80"
should be: "80 9c 95 95 96 11 bc 96 b9 95 9d 10"
```
2018/03/17
738
2,451
<issue_start>username_0: Have the list of lists "d" like that: ``` [OrderedDict([('id', '1'), ('name', 'Jack'), ('email', '<EMAIL>'), OrderedDict([('id', '2'), ('name', 'Ricky'), ('email', '<EMAIL>')] ``` I would like to save the output using csv module but without id line (so with columns Name, Email and their values). Performing like this: ``` path='/..' fields=['Name','Email'] with open(path, 'w') as f: writer = csv.writer(f) writer.writerow(fields) for item in new_d: writer.writerow([d[1], d[2]]) ``` So it saves me the whole line as a value. How is it possible to get deeper to the level of values of the list and save them properly in csv?<issue_comment>username_1: An alternative can be to use `pandas`: ``` In [4]: data = [OrderedDict([('id', '1'), ...: ('name', 'Jack'), ...: ('email', '<EMAIL>')]), ...: OrderedDict([('id', '2'), ...: ('name', 'Ricky'), ...: ('email', '<EMAIL>')])] In [5]: df = pd.DataFrame(data) In [6]: df_ = df.drop('id', axis=1) In [7]: df_.to_csv('data.csv', header=True, index=False) In [8]: !head data.csv name,email Jack,<EMAIL> Ricky,<EMAIL> ``` Upvotes: 2 <issue_comment>username_2: You are almost there: ``` from collections import OrderedDict import csv listOfDicts = [OrderedDict([('id', '1'), ('name', 'Jack'), ('email', '<EMAIL>')]), OrderedDict([('id', '2'), ('name', 'Ricky'), ('email', '<EMAIL>')])] path='someFilename.csv' fields=['Name','Email'] with open(path, 'w', newline="", encoding="utf8") as f: writer = csv.writer(f) writer.writerow(fields) for d in listOfDicts: # d are your different dictionaries writer.writerow([d['name'], d['email']]) ``` creates the file : ``` Name,Email Jack,<EMAIL> Ricky,<EMAIL> ``` It is important to `open(..)` the file with `newline=""` so you do not get additional newlines into it - `csv` will handle those on its own - you also should specifiy the `encoding` - just to make sure. 
I fixed some other indentation and minor parenthesis errors on your example data on the way and provided the needed imports to get a working example. [csv-writer](https://docs.python.org/3/library/csv.html#csv.writer) Upvotes: 3 [selected_answer]
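Besides indexing each dictionary by key, the standard library's `csv.DictWriter` can drop the unwanted `id` field automatically via `extrasaction='ignore'`. A sketch writing to an in-memory buffer (the email addresses are placeholders, since the originals are redacted above):

```python
import csv
import io
from collections import OrderedDict

rows = [OrderedDict([('id', '1'), ('name', 'Jack'), ('email', 'jack@example.com')]),
        OrderedDict([('id', '2'), ('name', 'Ricky'), ('email', 'ricky@example.com')])]

buf = io.StringIO()
# Fields not listed in fieldnames (here: 'id') are silently skipped
writer = csv.DictWriter(buf, fieldnames=['name', 'email'], extrasaction='ignore')
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

With a real file you would pass `open(path, 'w', newline="")` instead of the `StringIO` buffer.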
2018/03/17
1,311
4,851
<issue_start>username_0: So, without boring anyone with the backstory, I need to access data from a number of APIs in order to run my script. The data needs to all be loaded before I execute the script, which I'm normally comfortable doing: I just declare some fetch requests, write a Promise.all, then continue on with the function. HOWEVER, I've hit something of a snafu with a certain API that limits the number of results I can pull from one request to 100 and I need to query all of the results. I didn't think this was a huge deal since I figured I can just make a couple extra requests by affixing "&page=X" to the end of the request. The plan, then, is to request the total number of pages from the API and then feed that into a for loop to push a number of fetch requests into an array of promises (i.e., link://to/api/data&page=1, link://to/api/data&page=2, etc). When I actually attempt to create this array with a for loop, though, the array returns empty. Here's my work: ``` const dataUrlNoPage = 'link/to/api/data&page='; const totalPages = 3; //usually pulled via a function, but just using a static # for now let apiRequestLoop = function(inp) { return new Promise(function(resolve){ let promiseArray = []; for (let i = 1; i <= inp; i++) { let dataUrlLoop = dataUrlNoPage + i; fetch(dataUrlLoop).then(function(response){ promiseArray.push(response.json()); }) } resolve(promiseArray); }) } let finalPromiseArray = apiRequestLoop(totalPages).then(result => { let requestArray = [apiRequest1,apiRequest2]; //requestArray contains earlier fetch promises result.forEach(foo => { requestArray.push(foo); } ); return requestArray; }); ``` So, what's tripping me up is really the loop, and how it's not returning an array of promises. When I look at it in the console, it shows up as a blank array, but I can expand it and see the promises I was expecting. I see the "Value below was evaluated just now" response. 
No matter how many promises or .thens, I write, however, the array is never actually populated at run time. What's going on? Can I not generate fetch promises via a for loop? (Also, just to cut this line of questioning off a bit, yes, the API I'm trying to access is Wordpress. Looking around, most people suggest creating a custom endpoint, but let's assume for the purpose of this project I am forbidden from doing that.)<issue_comment>username_1: A few points: * you want to actually put the promises themselves into the array, not push to the array in a `.then()` chained to the promise. * you probably want to skip creating a new Promise in your function. Just get an array of all the promises from your loop, then return a `Promise.all` on the array. Like this: ``` let apiRequestLoop = function(inp) { let promiseArray = []; for (let i = 1; i <= inp; i++) { let dataUrlLoop = dataUrlNoPage + i; promiseArray.push(fetch(dataUrlLoop)) } return Promise.all(promiseArray); } ``` In your final `.then` statement, in finalPromiseArray, your result will be an array of the results from all the promises. like this `[response1, response2, response3, ...]` See the [Promise.all documentation](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all) for more details. Upvotes: 2 <issue_comment>username_2: You have several problems here. The first is that you have the function provided to `new Promise` itself containing promise creations. Don't do this! It's a definite anti-pattern and doesn't keep your code clean. The second is this basic bit of code: ``` let promiseArray = []; for (let i = 1; i <= inp; i++) { let dataUrlLoop = dataUrlNoPage + i; fetch(dataUrlLoop).then(function(response){ promiseArray.push(response.json()); }) } resolve(promiseArray); ``` This says: 1. create an empty array 2. loop through another array, doing HTTP requests 3. resolve your promise with the empty array 4. 
when the HTTP requests are completed, add them to the array Step four will always come after step three. So, you need to add the *promises* to your array as you go along, and have the overall promise resolve when they are all complete. ``` let apiRequestLoop = function(inp) { let promiseArray = []; for (let i = 1; i <= inp; i++) { let dataUrlLoop = dataUrlNoPage + i; promiseArray.push(fetch(dataUrlLoop).then(function(response) { return response.json(); })); } return Promise.all(promiseArray); } ``` or, with an arrow function to clean things up: ``` let apiRequestLoop = function(inp) { let promiseArray = []; for (let i = 1; i <= inp; i++) { let dataUrlLoop = dataUrlNoPage + i; promiseArray.push(fetch(dataUrlLoop).then(response => response.json())); } return Promise.all(promiseArray); } ``` Upvotes: 4 [selected_answer]
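The pattern both answers converge on (collect the pending work first, then wait on all of it at once) is language-agnostic. A Python sketch of the same shape using `concurrent.futures`, with `fetch_page` as a stand-in for the HTTP call:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_page(page):
    # Stand-in for an HTTP request like fetch(dataUrlNoPage + page)
    return {"page": page, "items": [f"item-{page}-{i}" for i in range(2)]}

def api_request_loop(total_pages):
    with ThreadPoolExecutor() as pool:
        # Push the *futures* into the list, not their not-yet-ready results
        futures = [pool.submit(fetch_page, p) for p in range(1, total_pages + 1)]
        # Equivalent of Promise.all: block until every future resolves
        return [f.result() for f in futures]

results = api_request_loop(3)
print([r["page"] for r in results])  # → [1, 2, 3]
```

The bug in the question maps directly: appending inside the callback is like calling `f.result()` before submitting the work, while the fix is to keep the future objects and resolve them all at the end.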
2018/03/17
359
1,269
<issue_start>username_0: So, I think this should be pretty simple but I can't seem to get it right. Say I have an empty div: This div gets filled dynamically with data from the database with ajax. I want, on button click, to empty this div but keep one element with a specific id, ex: , I tried: ``` $(document).on('click', '#button', function(){ $("#mainDiv > *:not('#divToKeep')").empty() }) ``` Now this does empty everything but keeps the empty divs there. I want to remove everything inside `#mainDiv` but the `#divToKeep` element.<issue_comment>username_1: Get the mainDiv, then get all elements inside it using children except the div you want to keep, then call remove: ``` $("#mainDiv").children(":not('#divToKeep')").remove(); ``` Check this fiddle: <https://jsfiddle.net/yzfw8atp/2/> Upvotes: 3 [selected_answer]<issue_comment>username_2: This way, it puts divToKeep at the top level, then removes everything else inside. ```js $('#divToKeep').appendTo('#mainDiv'); // move #divToKeep up to be a direct child of #mainDiv $('#mainDiv *:not(#divToKeep):not(#divToKeep *)').remove(); // remove everything except #divToKeep and its inner children ``` ```html ``` Upvotes: 0 <issue_comment>username_3: ``` $("#mainDiv").children().not("#divToKeep").remove(); ``` Upvotes: 0
2018/03/17
1,272
4,897
<issue_start>username_0: I have a small problem. On my wordpress based website I want to create an Order-Status form where people can use a code and check their order's progress. It's not an online store so I don't use woocommerce. The file containing the order's progress is a CSV file. I tried to use that through a function but it didn't work. I even tried Javascript but my code can't find the file on the server :( My question is: What language and what technique should I use for my needs? Thanks a lot, guys.<issue_comment>username_1: I actually used [PHPExcel](https://github.com/PHPOffice/PHPExcel) in a previous project that achieved what you're after, though I see it's been deprecated in favour of the [PhpSpreadsheet library](https://github.com/PHPOffice/PhpSpreadsheet). So I would use that one! What both libraries can essentially do (amongst many other things) is parse the spreadsheet and return the relevant information based on what you request. So you could put your spreadsheet in a separate directory which you then use the PhpSpreadsheet library to extract information from - and present to the customer in whatever way you see fit. Upvotes: 0 <issue_comment>username_2: I think this is what you are looking for without the need of any libraries: First you create a form (the form action can be your homepage since we will be listening for the $_GET parameters on `init`, which is run on every page load): ``` ``` Then you need to add an action on `init` in functions.php to listen for the GET parameter `csv_export` in order to start your functionality to prepare the csv file and output it for download: (we are using the exit(); function after we create the csv to make sure that nothing else runs after this process.) 
``` function check_for_export() { if ( isset( $_GET['csv_export'], $_GET['code'] ) && $_GET['csv_export'] == 'order_status' ) { ob_end_clean(); export_order_status_csv( $_GET['code'] ); exit(); } } add_action('init', 'check_for_export'); ``` Now you can start the functionality to generate the csv report. This function depends on how you are fetching the data but you can follow this example to set the headers and output the report: ``` function export_order_status_csv( $code ) { // Make a DateTime object and get a time stamp for the filename $date = new DateTime(); $ts = $date->format( "Y-m-d-G-i-s" ); // A name with a time stamp, to avoid duplicate filenames $filename = "order-statuses-export-$ts.csv"; // Tells the browser to expect a CSV file and bring up the // save dialog in the browser header( 'Pragma: public' ); header( 'Expires: 0' ); header( 'Cache-Control: must-revalidate, post-check=0, pre-check=0' ); header( 'Content-Description: File Transfer' ); header( 'Content-Type: text/csv' ); header( 'Content-Disposition: attachment;filename=' . $filename ); header( 'Content-Transfer-Encoding: binary' ); // This opens up the output buffer as a "file" $fp = fopen( 'php://output', 'w' ); //This needs to be customised from your end, I am doing a WP_Query for this example $results = new WP_Query(); if ( $results->have_posts() ) { //This is set to avoid issues with special characters $bom = ( chr( 0xEF ) . 
chr( 0xBF ) ); //add BOM to fix UTF-8 in Excel fputs( $fp, $bom ); // Set the headers of the csv fputcsv( $fp, [ 'orderID', 'orderDate', 'orderTotal' ] ); while ( $results->have_posts() ) { $results->the_post(); //Here we are inserting the row data per result fetched fputcsv( $fp, [ get_the_ID(), get_the_date(), 'your_custom_data' ] ); } wp_reset_query(); // Close the output buffer (Like you would a file) fclose( $fp ); } else { fputcsv( $fp, [ 'No Results' ] ); fclose( $fp ); } } ``` **When exporting a csv you have to hook to an action which is processed before anything has been outputted. There cannot be any output before creating the csv file since we are updating the headers.** If you want to read the data from a csv file and manipulate it to customise your csv export you can use this function instead of the WP\_Query in example above: <http://php.net/manual/en/function.fgetcsv.php> ``` $row = 1; if (($handle = fopen("test.csv", "r")) !== FALSE) { while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) { $num = count($data); echo " $num fields in line $row: \n"; $row++; for ($c=0; $c < $num; $c++) { echo $data[$c] . " \n"; } } fclose($handle); } ``` Upvotes: 1
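The lookup the question is ultimately after (find an order's status in a CSV file by a customer-supplied code) is straightforward in any language. A Python sketch; the column names and sample rows are assumptions for the demo:

```python
import csv
import io

# Hypothetical file contents; in practice you would open() the real CSV
CSV_DATA = """code,status
A123,shipped
B456,processing
"""

def order_status(code, csv_text=CSV_DATA):
    """Return the status for a given order code, or None if not found."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["code"] == code:
            return row["status"]
    return None

print(order_status("A123"))  # → shipped
```

The PHP `fgetcsv` loop quoted above does the same row-by-row scan; only the matching row needs to be rendered back to the visitor.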
2018/03/17
279
891
<issue_start>username_0: I have uploaded my HTML files to my website. Now I need to find a way to redirect users to a Home.html page when they enter www.mywebsite.com. I have tried to do this in htaccess but it did not work; I am not sure if it is correct: ``` RewriteEngine On #if request is mywebsite.co.uk/something OR RewriteCond %{HTTP_HOST} ^mywebsite\.co.uk$ [OR] # or www. mywebsite.co.uk/something RewriteCond %{HTTP_HOST} ^www\.mywebsite\.co.uk$ # but not mywebsite.co.uk/Home.html RewriteCond %{REQUEST_URI} !^/Home.html/ #change it to mywebsite.co.uk/Home.html/something RewriteRule (.*) /Home.html/$1 [R=301] ```<issue_comment>username_1: Remove your code and put only this in the `.htaccess` file: ``` DirectoryIndex Home.html ``` Upvotes: 1 <issue_comment>username_2: Make sure the .htaccess file is in the directory where you want to change the index.html file. Upvotes: 0
2018/03/17
523
1,552
<issue_start>username_0: I have 3 tables: * customers(name, id), * items(name, price, id), * purchases(id, items_id, customers_id) When I run: ``` select customers.name, SUM(items.price) as "price" from items INNER JOIN purchases ON items.id = purchases.item_id INNER JOIN customers ON purchases.customer_id = customers.id GROUP BY customers.name ORDER BY "price" ``` This is the result that I get: ``` "<NAME>" "5.00" "<NAME>" "11.30" "<NAME>" "101.29" "<NAME>" "899.00" "<NAME>" "1174.50" ``` However, I would like to show only those rows which have a price over 100 (the 3 bottom ones in this case). How can I do that?<issue_comment>username_1: This is where you use a `HAVING` clause. It's similar to a `WHERE` clause, but applied after the `GROUP BY`. ``` HAVING SUM(items.price) > 100 ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: If you want only SUM(prices) greater than 100, include a HAVING clause as below. ``` select customers.name, SUM(items.price) as "price" from items INNER JOIN purchases ON items.id = purchases.item_id INNER JOIN customers ON purchases.customer_id = customers.id GROUP BY customers.name having sum(items.price) >100 ``` Upvotes: 1 <issue_comment>username_3: Here is a variant using a WHERE clause. Note that WHERE filters the individual item prices before aggregation, so this returns customers who bought at least one item costing more than 100, which is not the same as filtering on the summed price: ``` select customers.name, SUM(items.price) as price from items, purchases, customers where items.id = purchases.item_id and purchases.customer_id = customers.id and items.price > 100 GROUP BY customers.name ORDER BY price; ``` Upvotes: 0
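The accepted `HAVING` answer can be checked end-to-end with SQLite's in-memory engine from Python (the table contents below are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, price REAL);
CREATE TABLE purchases (id INTEGER PRIMARY KEY, item_id INTEGER, customer_id INTEGER);
INSERT INTO customers VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO items VALUES (1, 'pen', 5.0), (2, 'desk', 150.0);
INSERT INTO purchases VALUES (1, 1, 1), (2, 2, 2);
""")

rows = conn.execute("""
    SELECT customers.name, SUM(items.price) AS price
    FROM items
    JOIN purchases ON items.id = purchases.item_id
    JOIN customers ON purchases.customer_id = customers.id
    GROUP BY customers.name
    HAVING SUM(items.price) > 100
    ORDER BY price
""").fetchall()
print(rows)  # → [('Bob', 150.0)]
```

Only Bob's summed total exceeds 100, so the `HAVING` clause drops Ann after aggregation, exactly as described.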
2018/03/17
3,468
7,263
<issue_start>username_0: I have a dataframe in which each row is the working hours of an employee defined by a start and a stop time: ``` DF < - EmployeeNum Start_datetime End_datetime 123 2012-02-01 07:30:00 2012-02-01 17:45:00 342 2012-02-01 08:00:00 2012-02-01 17:45:00 876 2012-02-01 10:45:00 2012-02-01 18:45:00 ``` I'd like to find the number of employees working during each hour on each day in a timespan: ``` Date Hour NumberofEmployeesWorking 2012-02-01 00:00 ? (number of employees working between 00:00 and 00:59) 2012-02-01 01:00 ? 2012-02-01 02:00 ? 2012-02-01 03:00 ? 2012-02-01 04:00 ? 2012-02-01 05:00 ? 2012-02-01 06:00 ? ``` How do I put my working hours into bins like this?<issue_comment>username_1: Your data, in a more consumable format, plus one row to span midnight (for example). I changed the format to include a "T" here, to make consumption easier, otherwise the middle space makes it less trivial to do it with `read.table(text='...')`. (You can skip this since you already have your real data.) ``` x <- read.table(text='EmployeeNum Start_datetime End_datetime 123 2012-02-01T07:30:00 2012-02-01T17:45:00 342 2012-02-01T08:00:00 2012-02-01T17:45:00 876 2012-02-01T10:45:00 2012-02-01T18:45:00 877 2012-02-01T22:45:00 2012-02-02T05:45:00', header=TRUE, stringsAsFactors=FALSE) ``` In case you haven't done it with your own data, convert all times to `POSIXt`, otherwise skip this, too. 
``` x[c('Start_datetime','End_datetime')] <- lapply(x[c('Start_datetime','End_datetime')], as.POSIXct, format='%Y-%m-%dT%H:%M:%S') ``` We need to generate a sequence of hourly timestamps: ``` startdate <- trunc(min(x$Start_datetime), units = "hours") enddate <- round(max(x$End_datetime), units = "hours") c(startdate, enddate) # [1] "2012-02-01 07:00:00 PST" "2012-02-02 06:00:00 PST" timestamps <- seq(startdate, enddate, by = "hour") head(timestamps) # [1] "2012-02-01 07:00:00 PST" "2012-02-01 08:00:00 PST" "2012-02-01 09:00:00 PST" # [4] "2012-02-01 10:00:00 PST" "2012-02-01 11:00:00 PST" "2012-02-01 12:00:00 PST" ``` (Assumption: all end timestamps are *after* their start timestamps ...) Now it's just a matter of tallying: ``` counts <- mapply(function(st,en) sum(st <= x$End_datetime & x$Start_datetime <= en), timestamps[-length(timestamps)], timestamps[-1]) data.frame( start = timestamps[ -length(timestamps) ], count = counts ) # start count # 1 2012-02-01 07:00:00 2 # 2 2012-02-01 08:00:00 2 # 3 2012-02-01 09:00:00 2 # 4 2012-02-01 10:00:00 3 # 5 2012-02-01 11:00:00 3 # 6 2012-02-01 12:00:00 3 # 7 2012-02-01 13:00:00 3 # 8 2012-02-01 14:00:00 3 # 9 2012-02-01 15:00:00 3 # 10 2012-02-01 16:00:00 3 # 11 2012-02-01 17:00:00 3 # 12 2012-02-01 18:00:00 1 # 13 2012-02-01 19:00:00 0 # 14 2012-02-01 20:00:00 0 # 15 2012-02-01 21:00:00 0 # 16 2012-02-01 22:00:00 1 # 17 2012-02-01 23:00:00 1 # 18 2012-02-02 00:00:00 1 # 19 2012-02-02 01:00:00 1 # 20 2012-02-02 02:00:00 1 # 21 2012-02-02 03:00:00 1 # 22 2012-02-02 04:00:00 1 # 23 2012-02-02 05:00:00 1 ``` Upvotes: 2 <issue_comment>username_2: I did not see @username_1 answer before posting. I came up with this independently, though it looks similar. I posted it here, so it may be helpful. Feel free to accept @username_1 answer. 
Data: ``` df1 <- read.table(text="EmployeeNum Start_datetime End_datetime 123 '2012-02-01 07:30:00' '2012-02-01 17:45:00' 342 '2012-02-01 08:00:00' '2012-02-01 17:45:00' 876 '2012-02-01 10:45:00' '2012-02-01 18:45:00'", header = TRUE ) df1 <- within(df1, Start_datetime <- as.POSIXct( Start_datetime)) df1 <- within(df1, End_datetime <- as.POSIXct( End_datetime)) ``` Code: Find datetime sequence by 1 hour for each employee and count the number by `Start_datetime`. Also, with this code, it is assumed that you separate original data by each single day and then apply the following code. If your data has multiple days mixed in it, with `IDateTime()` function from `data.table` package, it is possible to separate days from time and group by them while making the datetime sequence. ``` library('data.table') setDT(df1) # assign data.table class by reference df2 <- df1[, Map( f = function(x, y) seq( from = trunc(x, "hour"), to = round(y, "hour"), by = "1 hour" ), x = Start_datetime, y = End_datetime ), by = EmployeeNum ] colnames(df2)[ colnames(df2) == "V1" ] <- "Start_datetime" # for some reason I can't assign column name properly during the column creation step. 
``` Output: ``` df2[, .N, by = .( Start_datetime, End_datetime = Start_datetime + 3599 ) ] # Start_datetime End_datetime N # 1: 2012-02-01 07:00:00 2012-02-01 07:59:59 1 # 2: 2012-02-01 08:00:00 2012-02-01 08:59:59 2 # 3: 2012-02-01 09:00:00 2012-02-01 09:59:59 2 # 4: 2012-02-01 10:00:00 2012-02-01 10:59:59 3 # 5: 2012-02-01 11:00:00 2012-02-01 11:59:59 3 # 6: 2012-02-01 12:00:00 2012-02-01 12:59:59 3 # 7: 2012-02-01 13:00:00 2012-02-01 13:59:59 3 # 8: 2012-02-01 14:00:00 2012-02-01 14:59:59 3 # 9: 2012-02-01 15:00:00 2012-02-01 15:59:59 3 # 10: 2012-02-01 16:00:00 2012-02-01 16:59:59 3 # 11: 2012-02-01 17:00:00 2012-02-01 17:59:59 3 # 12: 2012-02-01 18:00:00 2012-02-01 18:59:59 3 # 13: 2012-02-01 19:00:00 2012-02-01 19:59:59 1 ``` Graph: `binwidth = 3600`; the value indicates 1 hour = 60 min * 60 sec = 3600 seconds ``` library('ggplot2') ggplot( data = df2, mapping = aes( x = Start_datetime ) ) + geom_histogram(binwidth = 3600, color = "red", fill = "white" ) + scale_x_datetime( date_breaks = "1 hour", date_labels = "%H:%M" ) + ylab("Number of Employees") + xlab( "Working Hours: 2012-02-01" ) + theme( axis.text.x = element_text(angle = 45, hjust = 1), panel.grid = element_blank(), panel.background = element_rect( fill = "white", color = "black") ) ``` [![enter image description here](https://i.stack.imgur.com/E8EbS.png)](https://i.stack.imgur.com/E8EbS.png) Upvotes: 2 [selected_answer]<issue_comment>username_3: Thank you both for your answers. I came up with a solution which is pretty similar to yours, but I was wondering if you could have a look and let me know what you think of it. I started a new empty dataframe, and then made two nested loops, to look at each start and end time in each row, and generate a sequence of hours in between. Then I append each hour in the sequence to the new empty dataframe. This way, I can simply do a count later. 
``` staffDetailHours <- data.frame("personnelNum"=integer(0), "workDate"=character(0), "Hour"=integer(0)) for (i in 1:dim(DF)[1]){ hoursList <- seq(as.POSIXlt(DF[i,]$START)$hour, as.POSIXlt(DF[i,]$END)$hour) for (j in 1:length(hoursList)) { staffDetailHours[nrow(staffDetailHours)+1,] = list( DF[i,]$EmployeeNum, DF[i,]$Date, hoursList[j] ) } } ``` Upvotes: 0
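The hourly overlap count the R answers compute can also be expressed in a few lines of Python, which makes the half-open interval logic explicit (function and variable names are my own; the shift times are the question's):

```python
from datetime import datetime, timedelta

shifts = [
    (datetime(2012, 2, 1, 7, 30),  datetime(2012, 2, 1, 17, 45)),
    (datetime(2012, 2, 1, 8, 0),   datetime(2012, 2, 1, 17, 45)),
    (datetime(2012, 2, 1, 10, 45), datetime(2012, 2, 1, 18, 45)),
]

def employees_per_hour(shifts):
    start = min(s for s, _ in shifts).replace(minute=0, second=0)
    end = max(e for _, e in shifts)
    counts = {}
    t = start
    while t <= end:
        # An employee counts for this bin if the shift overlaps [t, t + 1h)
        counts[t] = sum(1 for s, e in shifts
                        if s < t + timedelta(hours=1) and e > t)
        t += timedelta(hours=1)
    return counts

counts = employees_per_hour(shifts)
print(counts[datetime(2012, 2, 1, 9)])   # → 2
print(counts[datetime(2012, 2, 1, 11)])  # → 3
```

This agrees with the data.table output above (2 employees in the 09:00 bin, 3 from 10:00 through the afternoon).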
2018/03/17
654
1,814
<issue_start>username_0: Attempting to build an application based on OpenCV, I see two lines warning me of missing libraries for `libjpeg` and `libpng`. Here is an extract of the console output, which then goes on with a number of similar Qt-related issues. ``` /usr/bin/ld: warning: libjpeg.so.9, needed by /usr/local/lib/libopencv_imgcodecs.so, not found (try using -rpath or -rpath-link) /usr/bin/ld: warning: libpng16.so.16, needed by /usr/local/lib/libopencv_imgcodecs.so, not found (try using -rpath or -rpath-link) /usr/local/lib/libopencv_highgui.so: undefined reference to `QGraphicsView::viewportEvent(QEvent*)@Qt_5' /usr/local/lib/libopencv_highgui.so: undefined reference to `QAbstractSlider::value() const@Qt_5' ``` Researching a bit around, I found out how to show which libraries and versions I have installed on my system using `ldconfig -p | grep libjpeg` and `ldconfig -p | grep libpng`. They gave me the following: ``` libjpeg.so.8 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libjpeg.so.8 libjpeg.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libjpeg.so libpng12.so.0 (libc6,x86-64) => /lib/x86_64-linux-gnu/libpng12.so.0 ``` and ``` libpng12.so.0 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libpng12.so.0 libpng12.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libpng12.so ``` I clearly see I have an issue. The libraries installed on my computer are older than those required by OpenCV. However, I tried `sudo apt install libjpeg-dev`, and to `sudo apt update` but I haven't found the newer version being available.<issue_comment>username_1: This is how to install `libjpeg9` on Ubuntu 16.04: ``` sudo apt install libjpeg9-dev ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: For installing libjpeg9 Ubuntu 16.04 ``` $ sudo apt-get install libjpeg9-dev ``` Upvotes: 0
2018/03/17
936
2,587
<issue_start>username_0: I found this snippet of code online and am having difficulty in understanding what each part of it is doing as I'm not proficient in Python. > > The following routine takes an array as input and returns a dictionary that maps each unique value to its indices > > > ``` def partition(array): return {i: (array == i).nonzero()[0] for i in np.unique(array)} ```<issue_comment>username_1: Trace each part out, this should speak for itself. Comments inlined. ``` In [304]: array = np.array([1, 1, 2, 3, 2, 1, 2, 3]) In [305]: np.unique(array) # unique values in `array` Out[305]: array([1, 2, 3]) In [306]: array == 1 # retrieve a boolean mask where elements are equal to 1 Out[306]: array([ True, True, False, False, False, True, False, False]) In [307]: (array == 1).nonzero()[0] # get the `True` indices for the operation above Out[307]: array([0, 1, 5]) ``` In summary, the code is creating a mapping of - ``` In [308]: {i: (array == i).nonzero()[0] for i in np.unique(array)} Out[308]: {1: array([0, 1, 5]), 2: array([2, 4, 6]), 3: array([3, 7])} ``` And here's the slightly more readable version - ``` In [313]: mapping = {} ...: for i in np.unique(array): ...: mapping[i] = np.where(array == i)[0] ...: In [314]: mapping Out[314]: {1: array([0, 1, 5]), 2: array([2, 4, 6]), 3: array([3, 7])} ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: * `array == i` Return a boolean array of True whenever the value is equal to i and False otherwise. * `nonzero()` Return the indices of the elements that are non-zero (not False). <https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.nonzero.html> * `nonzero()[0]` Take the first element of the tuple returned by `nonzero()` (one index array per dimension); for a 1-D array this is the array of all indices where array[index] == i, not a single index. * `for i in np.unique(array)` Iterate over all the unique values of array, or in other words, repeat the logic for each unique value of the array. 
Upvotes: 2 <issue_comment>username_3: consider also the following Pandas solution: ``` import pandas as pd In [165]: s = pd.Series(array) In [166]: d = s.groupby(s).groups In [167]: d Out[167]: {1: Int64Index([0, 1, 5], dtype='int64'), 2: Int64Index([2, 4, 6], dtype='int64'), 3: Int64Index([3, 7], dtype='int64')} ``` PS `pandas.Int64Index` - supports all methods and indexing like a regular 1D numpy array it can be easily converted to Numpy array: ``` In [168]: {k:v.values for k,v in s.groupby(s).groups.items()} Out[168]: {1: array([0, 1, 5], dtype=int64), 2: array([2, 4, 6], dtype=int64), 3: array([3, 7], dtype=int64)} ``` Upvotes: 2
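For readers without NumPy or pandas installed, the same value-to-indices mapping can be built in a single pass with a plain dictionary, which also avoids re-scanning the array once per unique value:

```python
from collections import defaultdict

def partition(values):
    """Map each unique value to the list of indices where it occurs."""
    index_map = defaultdict(list)
    for idx, value in enumerate(values):
        index_map[value].append(idx)
    return dict(index_map)

print(partition([1, 1, 2, 3, 2, 1, 2, 3]))
# → {1: [0, 1, 5], 2: [2, 4, 6], 3: [3, 7]}
```

The result matches the NumPy version above, with plain lists in place of index arrays.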
2018/03/17
438
1,662
<issue_start>username_0: In JavaScript tutorial on Redux, there is a following JavaScript construct: ``` export default connect(mapDispatchToProps)(RemoveCounter); ``` Without going into depths of Redux (unless necessary), and I think `connect(..)` is a function call, how it is possible to specify the parameter twice, and how this is understood? If the complete code is required, it can be found [here](https://appdividend.com/2017/08/23/redux-tutorial-example-scratch/#Reducers).<issue_comment>username_1: `connect()` returns a function, so the second set of parentheses is just calling the function that was returned. `RemoveCounter` is passed as the argument to it. Upvotes: 1 <issue_comment>username_2: The function `connect()` itself returns a function. The returned function can then have arguments passed into it. ```js const add = (num1) => ( (num2) => num1 + num2 ) console.log( add(1)(2) ) ``` Upvotes: 1 <issue_comment>username_3: Nothing specifically ES6 about this-- you are looking at a function that itself returns another function. For example: ``` function addXMaker(x) { return function (num) { return num + x; } } addXMaker(10)(5); // returns 15 ``` This is perfectly valid in ES5, possible because in JavaScript functions are first-class citizens. [`connect`](https://github.com/reactjs/react-redux/blob/master/docs/api.md#connectmapstatetoprops-mapdispatchtoprops-mergeprops-options) accepts the config to map in state and dispatch, and returns a higher-order component, into which you can pass the component that you would like to receive the interface the HOC will inject. Upvotes: 3 [selected_answer]
2018/03/17
742
2,643
<issue_start>username_0: Hello. I am a beginner in this JavaScript journey, and I just hit a wall. I am learning the .filter() function on Array. My exercise is: > > Return only the rows in the matrix that have all positive integers > > > I have no problems with single arrays. For example my code with a single array: ``` function positiveRowsOnly (array) { var result = array.filter(function (ind) { return ind < 0; }); return result; }; console.log(positiveRowsOnly([1, 10, -100, 2, -20, 200, 3, 30, 300])); ``` It's quite simple for me to understand that "ind" in .filter will accept every element from the given array and check if ind < 0. What I am struggling with is if I have nested arrays. The original exercise is with nested arrays. ``` function positiveRowsOnly (array) { var result = array.filter(function (ind) { return ind < 0; }); return result; }; console.log(positiveRowsOnly([[1, 10, -100 ], [ 2, -20, 200 ], [ 3, 30, 300 ]])); ``` On the internet I just can not find any deeper explanation of how .filter() works: does filter go into one array and get each element? Does "ind" get just the first array and not the first array element? I was looking at Math functions or indexOf but no luck. I hope you understand my struggle. Can anyone explain how this can be done or, most important, how does .filter work on nested arrays? In pseudo code I know: look at an inner array; if it has a negative number then ignore it, else return that array.<issue_comment>username_1: `connect()` returns a function, so the second set of parentheses is just calling the function that was returned. `RemoveCounter` is passed as the argument to it. Upvotes: 1 <issue_comment>username_2: The function `connect()` itself returns a function. The returned function can then have arguments passed into it. ```js const add = (num1) => ( (num2) => num1 + num2 ) console.log( add(1)(2) ) ``` Upvotes: 1 <issue_comment>username_3: Nothing specifically ES6 about this-- you are looking at a function that itself returns another function. 
For example: ``` function addXMaker(x) { return function (num) { return num + x; } } addXMaker(10)(5); // returns 15 ``` This is perfectly valid in ES5, possible because in JavaScript functions are first-class citizens. [`connect`](https://github.com/reactjs/react-redux/blob/master/docs/api.md#connectmapstatetoprops-mapdispatchtoprops-mergeprops-options) accepts the config to map in state and dispatch, and returns a higher-order component, into which you can pass the component that you would like to receive the interface the HOC will inject. Upvotes: 3 [selected_answer]
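For completeness, the exercise stated in the question (keep only the matrix rows whose values are all positive) reduces to one predicate per row. A Python sketch; in JavaScript the analogous predicate would be `row.every(v => v > 0)` inside `filter`:

```python
def positive_rows_only(matrix):
    # Keep a row only if every entry in it is positive
    return [row for row in matrix if all(value > 0 for value in row)]

print(positive_rows_only([[1, 10, -100], [2, -20, 200], [3, 30, 300]]))
# → [[3, 30, 300]]
```

The key insight for the asker: with a nested array, the filter callback receives each whole inner row, not each number, so the per-row test must itself scan the row.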
2018/03/17
1,225
4,606
<issue_start>username_0: I have a Django Web-Application that uses celery in the background for periodic tasks. Right now I have three docker images * one for the django application * one for celery workers * one for the celery scheduler whose `Dockerfile`s all look like this: ``` FROM alpine:3.7 ENV PYTHONUNBUFFERED 1 RUN mkdir /code WORKDIR /code COPY Pipfile Pipfile.lock ./ RUN apk update && \ apk add python3 postgresql-libs jpeg-dev git && \ apk add --virtual .build-deps gcc python3-dev musl-dev postgresql-dev zlib-dev && \ pip3 install --no-cache-dir pipenv && \ pipenv install --system && \ apk --purge del .build-deps COPY . ./ # Run the image as a non-root user RUN adduser -D noroot USER noroot EXPOSE $PORT CMD ``` So they are all exactly the same except the last line. Would it make sense here to create some kind of base image that contains everything except CMD. And all three images use that as base and add only their respective CMD? Or won't that give me any advantages, because everything is cached anyway? Is a seperation like you see above reasonable? Two small bonus questions: * Sometimes the `apk update ..` layer is cached by docker. How does docker know that there are no updates here? * I often read that I should decrease layers as far a possible to reduce image size. But isn't that against the caching idea and will result in longer builds?<issue_comment>username_1: I would recommend looking at [docker-compose](https://docs.docker.com/compose/) to simplify management of multiple containers. Use a single Dockerfile like the one you posted above, then create a `docker-compose.yml` that might look something like this: ``` version: '3' services: # a django service serving an application on port 80 django: build: . command: python manage.py runserver ports: - 8000:80 # the celery worker worker: build: . command: celery worker # the celery scheduler scheduler: build: . 
command: celery beat ``` Of course, modify the commands here to be whatever you are using for your currently separate Dockerfiles. When you want to rebuild the image, `docker-compose build` will rebuild your container image from your Dockerfile for the first service, then reuse the built image for the other services (because they already exist in the cache). `docker-compose up` will spin up 3 instances of your container image, but overriding the run command each time. If you want to get more sophisticated, there are [plenty of resources](https://blog.syncano.io/configuring-running-django-celery-docker-containers-pt-1/) out there for the very common combination of django and celery. Upvotes: 2 <issue_comment>username_2: I suggest using one Dockerfile and just updating your CMD at runtime. A little modification will make it work both locally and on Heroku. As far as Heroku is concerned, it lets you set environment variables that are available when the container starts. [heroku set-up-your-local-environment-variables](https://devcenter.heroku.com/articles/heroku-local#set-up-your-local-environment-variables) ``` FROM alpine:3.7 ENV PYTHONUNBUFFERED 1 ENV APPLICATION_TO_RUN=default_application RUN mkdir /code WORKDIR /code COPY Pipfile Pipfile.lock ./ RUN apk update && \ apk add python3 postgresql-libs jpeg-dev git && \ apk add --virtual .build-deps gcc python3-dev musl-dev postgresql-dev zlib-dev && \ pip3 install --no-cache-dir pipenv && \ pipenv install --system && \ apk --purge del .build-deps COPY . ./ # Run the image as a non-root user RUN adduser -D noroot USER noroot EXPOSE $PORT CMD $APPLICATION_TO_RUN ``` So when you run your container, pass your application name to the run command. 
``` docker run -it --name test -e APPLICATION_TO_RUN="celery beat" --rm test ``` Upvotes: 2 <issue_comment>username_3: Late, but for those still looking for a solution, multi-stage builds can also be used here in a way that lets you specify the uncommon logic for each scenario in each stage, and specify at build time which stage you'd like to build. For a very simple example, with the Dockerfile: ``` FROM ubuntu as build # Your common logic here FROM build as one CMD ["/bin/bash", "-c", "echo foo"] FROM build as two CMD ["/bin/bash", "-c", "echo bar"] ``` you can do `docker build --target one .` to produce an image that prints the output "foo", and `docker build --target two .` to produce an image that prints the output "bar." In scenarios where the difference between the final stages is small - e.g., just a different entrypoint - it's a convenient approach. Upvotes: 1
2018/03/17
273
956
<issue_start>username_0: I'm familiar with Firebase and Firebase Storage, but new to the Google Cloud SDK. I found this topic <https://groups.google.com/forum/#!msg/firebase-talk/oSPWMS7MSNA/RnvU6aqtFwAJ> and tried to copy the results, but I get "IOError: [Errno 2] No such file or directory: u'cors-json-file.json'" from the terminal. I found a bit more information in the docs <https://cloud.google.com/storage/docs/configuring-cors#configure-cors-gsutil>, which describe the same process, and I get the same error. I'm finding it hard to do any troubleshooting with the Google Cloud SDK since I have already set up functions in Firebase.<issue_comment>username_1: My mistake was that I had not changed into the local directory containing my 'cors-json-file.json' ... Upvotes: 0 <issue_comment>username_2:

```
gsutil cors set cors.json gs://example-bucket
```

Example: if the cors.json file is on the D: drive, then the path will be like d:/folderName/cors\_filename. The gs:// URL is your bucket URL. Upvotes: 1
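The "No such file or directory" error in the question simply means `gsutil` could not find the JSON file at the given path; run the command from the directory that contains the file, or pass an absolute path. For reference, a minimal CORS configuration file has the following shape (the origin, method, and header values below are placeholders, not values from the original post):

```json
[
  {
    "origin": ["https://example.com"],
    "method": ["GET", "HEAD"],
    "responseHeader": ["Content-Type"],
    "maxAgeSeconds": 3600
  }
]
```

It is then applied with `gsutil cors set cors-json-file.json gs://your-bucket`.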
2018/03/17
991
3,759
<issue_start>username_0: I'm really confused about why I can not return the JSON result from `amazonMws.products.search()` and could use some help understanding what is going on. When I write it this way gives me `undefined`: ``` function listMatchingProducts(query) { const options = { Version: VERSION, Action: 'ListMatchingProducts', MarketplaceId: MARKET_PLACE_ID, SellerId: SELLER_ID, Query: query } amazonMws.products.search(options, (err, res) => { if(err){ throw(err) return } return res }) } ``` I also get `undefined` when using `amazonMws.products.search().then().catch()` as well. If I `return amazonMws.products.search()` I get a promise back instead of the result. Inside of the callbacks if I `console.log(res)` I get back the JSON result I'm expecting. So this led me to believe I need to use `async await` I think, but this results in `Promise { }`: ``` async function listMatchingProducts(query) { const options = { Version: VERSION, Action: 'ListMatchingProducts', MarketplaceId: MARKET_PLACE_ID, SellerId: SELLER_ID, Query: query } return await amazonMws.products.search(options) .then(res => { return res }) .catch(e => errorHandler(e)) } ``` I am totally lost, so if someone could explain to me what is going on, that would be greatly appreciated.<issue_comment>username_1: ``` function listMatchingProducts(query) { const options = { Version: VERSION, Action: 'ListMatchingProducts', MarketplaceId: MARKET_PLACE_ID, SellerId: SELLER_ID, Query: query } return amazonMws.products.search(options); //returns a promise } ``` Upvotes: 1 <issue_comment>username_2: The `amazonMws.products.search` function is asynchronous, meaning that it will give you a value later, and because of this, you can't get the value *now*. Instead, you'll have to say what you want to do *later* when you receive the value. This is what returning the promise does. The promise itself is the representation of this value that you'll receive later. 
``` function listMatchingProducts(query) { const options = { Version: VERSION, Action: 'ListMatchingProducts', MarketplaceId: MARKET_PLACE_ID, SellerId: SELLER_ID, Query: query } return amazonMws.products.search(options) } ``` Then, when calling the function, attach a handler to the promise. ``` listMatchingProducts(someQuery) .then(result => {/* do something with result */}) .catch(error => {/* handle the error */}) ``` And, though you don't *need* to use async await here, it can make the code look a little nicer in some situations, as though it were synchronous. Here's what calling the above function would look like with async await: ``` async function getProducts() { try { const result = await listMatchingProducts(someQuery) // do something with result } catch (error) { // handle the error } } ``` And, as usual, always consult the docs for any detail you're confused about: * [Using promises](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Using_promises) * [`await` keyword](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/await) Upvotes: 2 <issue_comment>username_3: As others have pointed out, Promises just don’t work the way you think they do. See my answer to [function returning too early before filter and reduce finish](https://stackoverflow.com/questions/49322403/function-returning-too-early-before-filter-and-reduce-finish/49325840#49325840) for a (hopefully) clear explanation of the problem you face. Upvotes: 0
2018/03/17
1,827
6,602
<issue_start>username_0: With Woocommerce, I am trying to make a WP\_Query for the product variations post type with the product category 'Apple'.

```
$args = array(
    'product_cat' => 'Apple',
    'post_type' => array('product', 'product_variation'),
    'post_status' => 'publish',
    'key' => '_visibility',
    'value' => 'visible',
    'posts_per_page' => 100,
    'taxonomy' => 'pa_size',
    'meta_value' => '39',
    'meta_query' => array( array(
        'key' => '_stock',
        'value' => 0,
        'compare' => '>'
    ) )
);
```

But I can't get it to work in this query. If I remove `'product_cat' => 'Apple'`, the query works. Why?<issue_comment>username_1: There are many mistakes in this `WP_Query` regarding Woocommerce products:

* For **product category** and **product attribute** you should normally use a `tax_query` instead.
* For **product visibility**, since Woocommerce 3, it's handled by the `product_visibility` taxonomy through its `'exclude-from-search'` and `'exclude-from-catalog'` terms.

**Important notes about Product variations:**

* Product categories *(or product tags)* are **not handled** by product variations, but by the parent variable product.
* Product attributes for variations are handled as **post meta data**, with a `meta_key` prepended by "`attribute_`" and a `meta_value` that is the **term slug**.
* **Product visibility is not handled in product variations**, as they are not displayed in archive pages.

So when using a `WP_Query`, you **can NOT query** the "product" post type and the "product\_variation" post type at the same time, **as they are really different**.
To make your query work for "product\_variation" post type, you need a little utility function that will get the parent variable product for a product category *(or any custom taxonomy as Product tags…)*: ``` // Utility function to get the parent variable product IDs for a any term of a taxonomy function get_variation_parent_ids_from_term( $term, $taxonomy, $type ){ global $wpdb; return $wpdb->get_col( " SELECT DISTINCT p.ID FROM {$wpdb->prefix}posts as p INNER JOIN {$wpdb->prefix}posts as p2 ON p2.post_parent = p.ID INNER JOIN {$wpdb->prefix}term_relationships as tr ON p.ID = tr.object_id INNER JOIN {$wpdb->prefix}term_taxonomy as tt ON tr.term_taxonomy_id = tt.term_taxonomy_id INNER JOIN {$wpdb->prefix}terms as t ON tt.term_id = t.term_id WHERE p.post_type = 'product' AND p.post_status = 'publish' AND p2.post_status = 'publish' AND tt.taxonomy = '$taxonomy' AND t.$type = '$term' " ); } ``` Code goes in function.php file of your active child theme (or active theme). Tested and works. **Necessary for the `WP_Query` below**… --- Here `WP_Query` code **for Product variations** *(only)* related to a specific product category and specific variation attribute values: ``` // Settings $cat_name = 'Apple'; // Product category name $attr_taxonomy = 'pa_size'; // Product attribute $attribute_term_slugs = array('39'); // <== Need to be term SLUGs $query = new WP_Query( array( 'post_type' => 'product_variation', 'post_status' => 'publish', 'posts_per_page' => 100, 'post_parent__in' => get_variation_parent_ids_from_term( $cat_name, 'product_cat', 'name' ), // Variations 'meta_query' => array( 'relation' => 'AND', array( 'key' => '_stock', 'value' => 0, 'compare' => '>' ), array( 'key' => 'attribute_'.$attr_taxonomy, // Product variation attribute 'value' => $attribute_term_slugs, // Term slugs only 'compare' => 'IN', ), ), ) ); // Display the queried products count echo 'Product count: ' . $query->post\_count . 
'';

// Displaying raw output for posts
print\_r($query->posts);
```

Tested and works. Upvotes: 5 [selected_answer]<issue_comment>username_2: There are some ways to do this. You can loop over the products and check if a product is `variable`; if it is, you can add another loop to also include its variations. But if you must do it with `WP_Query` for any reason, you can use the `posts_join` filter. When you use a `tax_query`, it adds a `LEFT JOIN` SQL statement that compares the post `ID` column in the `wp_posts` table with the term relationship `object_id` column in the `wp_term_relationships` table. The statement looks like this:

```
LEFT JOIN wp_term_relationships ON (wp_posts.ID = wp_term_relationships.object_id)
```

All we need to do here is add an `OR` condition so that the query also matches on post parent ids, making the statement look like below (in Woocommerce, product variations have a post parent, which is the product itself):

```
LEFT JOIN wp_term_relationships ON (wp_posts.ID = wp_term_relationships.object_id OR wp_posts.post_parent = wp_term_relationships.object_id)
```

Long story short, how do we do it?
-----------------------------------

We will use a [posts\_join](https://developer.wordpress.org/reference/hooks/posts_join/) filter to do this, but remember that we need to remove the filter immediately after the `WP_Query` is triggered (you can also set a unique query variable and only change the `join` statement when that variable is present):

```
function dornaweb_include_parent_categories_join($joins, $wp_query) {
    /*
     * You can make a condition to make sure this code will only run on your intended query
     * Although removing the filter after WP_Query also does it
     */
    if ($wp_query->query['my_include_parent_posts_cat_variable']) {
        global $wpdb;
        $find = "{$wpdb->prefix}posts.ID = {$wpdb->prefix}term_relationships.object_id";
        $joins = str_replace($find, $find .
" OR {$wpdb->prefix}posts.post_parent = {$wpdb->prefix}term_relationships.object_id", $joins); } return $joins; } add_filter( 'posts_join' , "dornaweb_include_parent_categories_join", 99, 2); $products_list = new \WP_Query([ 'post_type' => ['product', 'product_variation'], 'posts_per_page' => 15, 'paged' => !empty($_GET['page']) ? absint($_GET['page']) : 1, 'tax_query' => [ [ 'taxonomy' => 'product_cat', 'terms' => [$category], ] ], 'my_include_parent_posts_cat_variable' => true ]); remove_filter( 'posts_join' , "dornaweb_include_parent_categories_join", 99, 2); ``` Upvotes: 0
2018/03/17
1,022
3,522
<issue_start>username_0: I'm trying to create an installation script and display a progress bar during the install. ``` $localfolder= (Get-Location).path start-process -FilePath "$localfolder\Installer.exe" -ArgumentList "/silent /accepteula" -Wait ``` and as progress bar I want to add: ``` for($i = 0; $i -le 100; $i++) { Write-Progress -Activity "Installer" -PercentComplete $i -Status "Installing"; Sleep -Milliseconds 100; } ``` But I can't find the way to run the progress bar while the installer is running. If someone has an idea...<issue_comment>username_1: You could muck around with threading options, for your progress bar, but I don't recommend it. Instead, forego `-Wait` with `Start-Process`, and use `-PassThru` to return a [`[System.Diagnostics.Process]` object](https://msdn.microsoft.com/en-us/library/system.diagnostics.process%28v=vs.110%29.aspx). With that, you can check for the process having terminated yourself. This is important for two reasons, both related to the fact that your progress bar isn't actually tracking the progress of the installation: 1. You want to be able to abort the progress bar as soon as the process is finished. 2. You maybe want to reset the progress bar to 0 if it happens to take longer than 10,000 milliseconds. The `Process` object has [a boolean property called `.HasExited`](https://msdn.microsoft.com/en-us/library/system.diagnostics.process.hasexited(v=vs.110).aspx) which you can use for this purpose. 
With all that in mind, I'd do something like this: ``` $localfolder= (Get-Location).path $process = Start-Process -FilePath "$localfolder\Installer.exe" -ArgumentList "/silent /accepteula" -PassThru for($i = 0; $i -le 100; $i = ($i + 1) % 100) { Write-Progress -Activity "Installer" -PercentComplete $i -Status "Installing" Start-Sleep -Milliseconds 100 if ($process.HasExited) { Write-Progress -Activity "Installer" -Completed break } } ``` Summary of Changes ------------------ * `Start-Process` now uses `-PassThru` instead of `-Wait` and assigns the process object to the `$process` variable. * The `for` loop's iterator uses `$i = ($i + 1) % 100` instead of `$i++`, so that it keeps resetting to `0` when it reaches `100`. * An `if` block checks to see if the process has exited; if so, it ends the progress bar and then `break`s out of the loop. Slight caveat: the `for` loop is now an infinite loop that only breaks when the process exits. If the process is stuck, so is the loop. You could separately time the operation and handle a timeout if you wanted to. Upvotes: 4 [selected_answer]<issue_comment>username_2: You can make the Progressbar into a function and call it whenever you need to. Make sure you set this up: $ProgramName = "installer.exe" $Attributes = "/accepteula" ``` Function LoadingBar {For($I = 0; $I -le 100; $I = ($I + 1) % 100){Write-Progress -Activity "$ProgramName" -CurrentOperation "Please wait for instalation to complete... The bar will loop." -PercentComplete $I -Status "Installing";Start-Sleep -M 500;If ($LoadingProcess.HasExited) {Write-Progress -Activity "Installer" -Completed;Sleep 1;Break}}} ``` Then, you can call it like so, ``` $LoadingProcess = Start-Process -FilePath "F:\PortableApps\7zip\7zFM.exe" -ArgumentList "$Attributes" -PassThru;LoadingBar;Sleep 1;$LoadingProcess.WaitForExit() | Out-Null ``` This will display a progressbar at the top with the program name, action (Installing) and when the program is done, it will continue. Upvotes: 1
2018/03/17
724
1,902
<issue_start>username_0: I want to add a variable value to a dataframe based on the order of the observation in the data frame. ``` … Subject Latency(s) 1 A 25 2 A 24 3 A 25 4 B 22 5 B 24 6 B 23 ``` I want to add a third column called `Trial` and I want the values to be either T1, T2, or T3 based on the order of the observation and by Subject. So for example, Subject A would get T1 in row 1, T2 in row 2, and T3 in row 3. Then the same for subject B, and so on. Right now my approach is to use `group_by` in `dplyr` to group by Subject. But I'm not sure then how to specify the new variable using `mutate`.<issue_comment>username_1: This solution should work for any number of subjects. To illustrate, copy and paste this code into your console. ``` library(dplyr) d <- data.frame(subject = c("A","A","A","B","B","B","C","D","D"), latency = c(25,24,25,22,24,23,34,54,34)) # get counts of unique subjects n <- d %>% dplyr::count(subject) # create a list of sequences my_list <- lapply(n$n, seq) # paste a "T" to each of these sequences t_list <- lapply(my_list, function(x){paste0("T", x)}) # bind the collapsed list back onto your df d$trial <- do.call(c, t_list) ``` Upvotes: 1 <issue_comment>username_2: Use `mutate` w/ `row_number` & `group_by(Subject)` ```r library(dplyr) txt <- "ID Subject Latency(s) 1 A 25 2 A 24 3 A 25 4 B 22 5 B 24 6 B 23" dat <- read.table(text = txt, header = TRUE) dat <- dat %>% group_by(Subject) %>% mutate(Trial = paste0("T", row_number())) dat #> # A tibble: 6 x 4 #> # Groups: Subject [2] #> ID Subject Latency.s. Trial #> #> 1 1 A 25 T1 #> 2 2 A 24 T2 #> 3 3 A 25 T3 #> 4 4 B 22 T1 #> 5 5 B 24 T2 #> 6 6 B 23 T3 ``` Created on 2018-03-17 by the [reprex package](http://reprex.tidyverse.org) (v0.2.0). Upvotes: 3 [selected_answer]
2018/03/17
643
1,680
<issue_start>username_0: Hello, I want to use "var" on multiple IDs.

```js
// Get modal element
var modal = document.getElementById('modal1');
// Get open modal button
var modalBtn = document.getElementById('Info1');
// Get close button
var closeBtn = document.getElementById('closeBtn1');
```

This is the code that works, but what I want is for it to cover "modal1, modal2 ..." and "Info1, Info2 ...". Can you help me with that?
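The question above describes repeated lookups for ids that differ only by a numeric suffix. One common pattern is to generate the id strings and loop over them; the helper below is a hypothetical sketch (the name `idsWithPrefix` is made up here), and in a browser you would map each id through `document.getElementById`:

```javascript
// Hypothetical helper: build the id strings "modal1".."modalN",
// "Info1".."InfoN", "closeBtn1".."closeBtnN", and so on.
function idsWithPrefix(prefix, count) {
  return Array.from({ length: count }, (_, i) => `${prefix}${i + 1}`);
}

// In a browser you would then resolve the ids to elements, e.g.:
// const modals = idsWithPrefix("modal", 3).map(id => document.getElementById(id));
const modalIds = idsWithPrefix("modal", 3); // ["modal1", "modal2", "modal3"]
```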
2018/03/17
825
3,336
<issue_start>username_0: Is the following `add()` function referentially transparent? ``` const appState = { runningTotal: 0 } function add(x, y) { const total = x + y; appState.runningTotal += total; return total; } ``` I'm unsure of the answer due to a handful of definitions I've found for referential transparency. Here are some in the order of my confidence of their correctness. A function is referentially transparent if: 1. It can be replaced by its value and the behavior of the program remains the same 2. Given some input it will always produce the same output 3. It only depends on its input 4. It is stateless Given each of the definitions above I would think the answer is: 1. Maybe - I think it depends on how appState.runningTotal is used elsewhere in the program, but I'm not sure. 2. Yes 3. I'm not sure - It only depends on its input *to produce the output*, but it also uses `appState` in the body of the function 4. No Back to the specific question: is `add()` referentially transparent? Thanks in advance! P.S. - please let me know if I'm conflating multiple concepts, namely the concept of a `pure function`.<issue_comment>username_1: A pure function is referentially transparent. I call it "copypastability", aka you can copy paste each part of referentially transparent code around, and it'll still work as originally intended. All of the four criteria have to be fulfilled, although you can shrink them to the first statement. The others can all be inferred from that one. If a function can be reasonably replaced, that means you can replace it with a map/dictionary which has input as keys and outputs as values. So it'll always return the same thing on the same input. The same analogy works just fine with the "only depends on input" and "stateless". Upvotes: 0 <issue_comment>username_2: No, it isn't a referentially transparent function. 
Referential transparency refers specifically to the first criteria you have listed, namely that you can freely substitute the values on the left and right hand side of an expression without changing the behaviour of the program. `add(2,3)` returns the value `5` but you cannot replace instances of `add(2,3)` with `5` in your program because `add(2, 3)` also has the *side effect* of incrementing `runningTotal` by `5`. Substituting `add(2, 3)` for `5` would result in `runningTotal` not being incremented, changing the behaviour of your program. Upvotes: 3 [selected_answer]<issue_comment>username_3: I'd go with > > Maybe - It depends on how `appState.runningTotal` is used > > > as when it is not used, then it can be ignored. Obviously it is global state, but is it just for debugging or is it part of your actual application state? If the latter, then the function is not pure of course - it does change the state and replacing a call with the result value (or doing unnecessary calls whose result is dropped) would change the behaviour of your program. But if you do consider `appState.runningTotal` to not be part of the semantics of your program, and non of its functionality depends on it, you might as well ignore this side effect. We do this all the time, every real world computation affects the state of the computer it runs on, and we choose to ignore that when we consider the purity of our functions. Upvotes: 2
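The substitution test described in the answers can be made concrete. The sketch below mirrors the question's `add()` function; everything else (the variable names, the "program A / program B" framing) is made up for illustration:

```javascript
// Impure version, as in the question: it writes to state outside the function.
let runningTotal = 0;
function add(x, y) {
  const total = x + y;
  runningTotal += total;
  return total;
}

// A referentially transparent version: output depends only on the input.
function pureAdd(x, y) {
  return x + y;
}

// Program A: actually performs the call.
runningTotal = 0;
add(2, 3);
const stateAfterCall = runningTotal; // 5, because the side effect ran

// Program B: "replace the call with its value", as the definition demands.
runningTotal = 0;
const substitutedValue = 5; // stands in for add(2, 3)
const stateAfterValue = runningTotal; // still 0: behaviour changed, so add() is not referentially transparent
```

Replacing a call to `pureAdd(2, 3)` with `5`, by contrast, changes nothing observable.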
2018/03/17
774
2,967
<issue_start>username_0: What is the best way to get some JSON data from a public API such as [British Air quality](https://api.carbonintensity.org.uk/intensity), i.e.

```
{
  "data":[{
    "from": "2018-03-17T22:00Z",
    "to": "2018-03-17T22:30Z",
    "intensity": {
      "forecast": 251,
      "actual": 250,
      "index": "moderate"
    }
  }]
}
```

and create a new endpoint with some of the data under a different name, transforming it to this (I'm dropping some information I don't need):

```
{
  "Weather": [
    {"newName": "forecastValue"},
    {"newName2": "ActualValue"}
  ]
}
```

I am facing this problem because I'm working with a third-party application which only supports JSON if it is formatted like that (2 values and a special name)... Thank you very much in advance for any help!
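As an illustrative sketch of the reshaping the question asks about (in JavaScript; the key names "Weather", "newName", and "newName2" are the question's own placeholders, and fetching the API or serving the new endpoint is out of scope here):

```javascript
// Hypothetical reshaper: pick the fields we need from the carbon-intensity
// payload shape shown in the question and rename them.
function reshape(payload) {
  const { forecast, actual } = payload.data[0].intensity;
  return { Weather: [{ newName: forecast }, { newName2: actual }] };
}

// Example with the sample document from the question:
const sample = {
  data: [{
    from: "2018-03-17T22:00Z",
    to: "2018-03-17T22:30Z",
    intensity: { forecast: 251, actual: 250, index: "moderate" }
  }]
};
const reshaped = reshape(sample); // { Weather: [ { newName: 251 }, { newName2: 250 } ] }
```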
2018/03/17
609
2,094
<issue_start>username_0: I've just recently started trying to learn C through the tutorials on Wikibooks. I've read the beginning C pages listed [here](https://en.wikibooks.org/wiki/C_Programming) and am attempting to do the exercises. I'm having a problem with the second question on loops: wherein I'm trying to make a function to output a triangle made up of lines of \* characters, where the height is *2n-1* if the width is *n*. My first thought was to make a nested loop, the outer of which would create a variable for the line number, and compare it to the max height. The inner loop would create a variable that would essentially serve as the index of the \* character within that particular line. My problem is I don't know how to deal with making the lines after the max width decrease in size. Can anyone point me in the right direction? Here is my code: ``` #include void triangle(int); int main() { int width; printf("%s", "Please enter a width for your triangle: "); scanf("%d", & width); triangle(width); return 0; } void triangle(int width) { for (int line = 1; line <= (2 \* width) - 1; line++) { for (int i = 0; i < line && i < width; i++) { printf("%s", "\*"); } printf("%s", "\n"); } } ```<issue_comment>username_1: Try this: ``` void triangle(int width) { int line, i, rev = 0; for (line = 1; line < width; ++line) { for (i = 0; i < line && i < width; i++) { printf("*"); } printf("\n"); } for (; line; --line) { for (i = 0; i < line && i < width; i++) { printf("*"); } printf("\n"); } } ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: if you want to do it in one pair of nested for loops try this: ``` void triangle(int width) { int i, j, height, tmp; height = 2 * width - 1; tmp = 1; for (i = 0; i < height; i++) { for (j = 0; j < tmp; j++) { putchar('*'); } putchar('\n'); if (i < height / 2) { tmp++; } else { tmp--; } } } ``` Upvotes: 0
2018/03/17
2,106
6,486
<issue_start>username_0: I need to get the following output: ``` * ** *** **** ***** ****** ******* ******** ********* ********** ``` So its 10 rows,and my stars will start at 1 and go to 10. Currently I am getting: ``` ********** *********** ************ ************* ************** *************** **************** ***************** ****************** ******************* ******************** ``` My code: ```x86 section .data char db ' ' trianglesize db 0; ;stars per line trianglerows db 10; section .text global _start _start mov rax, [trianglerows] ;rows outer_loop: mov rbx, [trianglerows] inner_loop: call star dec bx cmp bx,0 jg inner_loop call newline call down_triangle dec ax cmp ax, 0 jne outer_loop call newline call exit exit: mov eax,1 ;sys_exit mov ebx,0 ;return 0 int 80h; ret newline: mov [char],byte 10 push rax; push rbx; mov eax,4; ;sys_write mov ebx,1; ;stdout mov ecx, char; mov edx,1; ;size of new line int 80h pop rbx; pop rax; ret star: mov [char], byte '*'; push rax; push rbx; mov eax,4; ;sys_write mov ebx,1; ;stdout mov ecx, char; mov edx,1; int 80h; pop rbx; pop rax; ret down_triangle: push rax; push rbx; mov rax, [trianglerows] inc ax mov [trianglerows],rax pop rbx pop rax ret ``` I tried and tried and tried but I couldn't get what I needed to get. I seem to be unable to find a way to separate the rows from the lines of stars, because of all those `push` and `pop`. Honestly, I do not understand these much. I've been told that they are needed to execute the loops, but I am not sure why, for example, in the function `star` I would need to call the outer loop. I couldn't find any combination of `push` and `pop` that worked. I am constantly getting either many stars or one star per line or just one star. I am literally puzzled at which bits I'm changing and keeping the same. I was able to get the required output but one that never ended increasing. I was able to get output starting from 10 stars and going down to one, but never what I wanted. 
What am I doing wrong? How do I do this question?<issue_comment>username_1: Your first row has 10 stars because you are using `[trianglerows]` in your inner loop. I'm sure you intended to use `[trianglesize]` (which currently you aren't using anywhere). Then in `down_triangle`, you'll want to increment, again, `[trianglesize]` rather than `[trianglerows]`. Finally, you probably want `[trianglesize]` to start with 1 rather than 0, for 1 star in the first row. Also, be sure to correct your memory usage as described by <NAME> in the comments below, otherwise your variables are being corrupted because they share the same memory. Upvotes: 3 <issue_comment>username_2: I solved the problem this way, it's 32-bit: ```x86 bits 32 global _start section .data rows dw 10 section .text _start: movzx ebx, word [rows] ; ebx holds number of rows, used in loop ; here we count how many symbols we have lea eax, [ebx+3] imul eax,ebx shr eax,1 ; shr is used to divide by two ; now eax holds number of all symbols mov edx, eax ; now edx holds number of all symbols, used in print ;we prepare stack to fill data mov ecx,esp sub esp,edx ;we fill stack backwards next_line: dec ecx mov [ecx],byte 10 mov eax,ebx next_star: dec ecx mov [ecx],byte '*' dec eax jg next_star dec ebx jg next_line ;print ; edx has number of chars; ecx is pointer on the string mov eax,4; ;sys_write inc ebx; ;1 - stdout, at the end of the loop we have ebx=0 int 80h; ;exit mov eax,1 ;1 - sys_exit xor ebx,ebx ;0 - return 0 int 80h; ret ``` How did I do it? First of all, I count number of symbols what we have to print. I'll print it all at once. It's the sum of a finite [arithmetic progression](https://en.wikipedia.org/wiki/Arithmetic_progression)(arithmetic series). 
[![TriangleImage](https://upload.wikimedia.org/wikipedia/commons/thumb/c/c7/Arithmetic_progression.svg/360px-Arithmetic_progression.svg.png)](https://en.wikipedia.org/wiki/Arithmetic_progression)

In our case we have ![a1=2andd=1](https://latex.codecogs.com/gif.latex?a_1%3D2%2C%20d%3D1) ![Sn](https://latex.codecogs.com/gif.latex?S_n%3D%5Cfrac%7Bn%28n+3%29%7D%7B2%7D)

We see 3 operations: `+`, `*` and `/`. We can optimise only the division by 2, by doing a right shift:

```x86
lea eax, [ebx+3] ; n + 3
imul eax,ebx     ; n * (n + 3)
shr eax,1        ; n * (n + 3) / 2
```

Our string will be on the stack; let's reserve enough memory for it:

```x86
mov ecx,esp
sub esp,edx
```

And then we fill our stack with stars and `\n`s:

```x86
next_line:
dec ecx
mov [ecx],byte 10
mov eax,ebx
next_star:
dec ecx
mov [ecx],byte '*'
dec eax
jg next_star
dec ebx
jg next_line
```

I fill it backwards. What does that mean? I fill the string with symbols from the end to the beginning. Why do I do that? Simply because I want to use as few registers as possible. At the end of the loop, `ecx` contains a pointer to the string we want to print. If I filled it forwards, `ecx` would contain a pointer to where `esp` was before the stack preparation, and I couldn't use the register as a pointer to the string in the `print` function. Also, I would have to use another register to decrement, or use `cmp`, which is slower than `dec`. That's all: print and end.
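The character count used above is easy to sanity-check outside of assembly. This sketch (in JavaScript; the function names are made up) compares the closed form against a direct count, using the fact that row `i` holds `i` stars plus one newline:

```javascript
// Direct count: row i contributes i stars plus 1 newline, for i = 1..n.
function countByLoop(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i + 1;
  return total;
}

// Closed form from the arithmetic series with a1 = 2, d = 1.
function closedForm(n) {
  return (n * (n + 3)) / 2;
}

// For the 10-row triangle in the question: 10 * 13 / 2 = 65 characters.
```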
--- **Another case** ``` global _start section .data rows dw 10 section .text _start: ;it defines how many symbols we have to print movzx ebx, byte[rows] ; ebx holds number of rows lea eax,[ebx+3] imul eax,ebx shr eax,1 ; now eax holds number of all symbols mov edx,eax ; now edx holds number of all symbols, used in print ;prepare pointer mov ecx,esp sub ecx,eax ; ecx points on the beginning of the string, used in print ;fill the string by stars mov eax,edx shr eax,2 mov ebp, dword '****' next_star: mov [ecx+4*eax],ebp dec eax jge next_star ;fill the string by '\n' mov edi,esp dec edi mov eax,ebx; in the eax is number of rows inc eax next_n: mov [edi],byte 0xa sub edi,eax dec eax jg next_n ;print ;mov ecx,esp mov eax,4; ;sys_write mov ebx,1; ;1 - stdout int 80h; ;exit mov eax,1 ;1 - sys_exit xor ebx,ebx ;0 - return 0 int 80h; ret ``` Here, at the beginning we fill the stack by stars and only after that we fill it by `\n`s <https://github.com/tigertv/stackoverflow-answers> Upvotes: 2
2018/03/17
703
1,939
<issue_start>username_0: I have a data frame input that looks like: > > > ``` > col1 col2 col3 > 0 3 1 NaN > 1 NaN 7 8 > > ``` > > How to collapse *all* rows while concatenating the data in the rows with `', '`? The desired data frame output: > > > ``` > col1 col2 col3 > 0 3 1, 7 8 > > ``` > > --- Sample input code: ``` import pandas as pd import numpy as np d = {'col1': ["3", np.nan], 'col2': ["1", "7"], 'col3': [np.nan, "8"]} df = pd.DataFrame(data=d) ```<issue_comment>username_1: `agg` + `dropna` + `str.join` comes to mind. ``` df.agg(lambda x: ', '.join(x.dropna())).to_frame().T col1 col2 col3 0 3 1, 7 8 ``` There are other solutions, my peers will find them for you :) Upvotes: 4 [selected_answer]<issue_comment>username_2: One way to get what you want would be to create a new dataframe with the same columns as your old dataframe, and populate the first index with your desired data. In your case, your desired data would be a list of each column, joined by `', '`, and with your `NaN` values removed: ``` new_df = pd.DataFrame(columns=df.columns) for col in df.columns: new_df.loc[0, col] = ', '.join(df[col].dropna().tolist()) >>> new_df col1 col2 col3 0 3 1, 7 8 ``` Upvotes: 2 <issue_comment>username_3: ``` pd.DataFrame( [[ ', '.join(map(str, map(int, filter(pd.notna, c)))) for c in zip(*df.values) ]], columns=df.columns ) col1 col2 col3 0 3 1, 7 8 ``` Upvotes: 3 <issue_comment>username_4: One more option: ``` In [156]: pd.DataFrame([[df[c].dropna().astype(int).astype(str).str.cat(sep=', ') for c in df]], columns=df.columns) Out[156]: col1 col2 col3 0 3 1, 7 8 ``` Upvotes: 2 <issue_comment>username_5: With stack ``` df.stack().groupby(level=1).apply(','.join).to_frame().T Out[163]: col1 col2 col3 0 3 1,7 8 ``` Upvotes: 2
2018/03/17
759
2,789
<issue_start>username_0: I have a `ClassA` object that sets method references inside of `ClassB1` and `ClassB2` objects. `ClassB1` and `ClassB2` objects will later use this method reference while running their methods. But, sometimes we do not set the method reference:

```
public class ClassA {
    public ClassA() {
        ClassB1 objB1 = new ClassB1();
        ClassB2 objB2 = new ClassB2();
        objB1.setFuncitonA(this::functionA);
        objB2.setFuncitonA(this::functionA);
        objB1.functionB();
        objB2.functionB();
    }

    public void functionA(Integer x) {
        x *= 2;
    }
}

public class ClassB1 {
    private Integer intObjB = new Integer(2);
    private Consumer<Integer> functionA = null;

    public void functionB() {
        if (functionA != null) {
            functionA.accept(intObjB);
        }
    }

    public void setFuncitonA(Consumer<Integer> functionA) {
        this.functionA = functionA;
    }
}

public class ClassB2 {
    private Integer intObjB = new Integer(2);
    private Consumer<Integer> functionA = this::defaultFunctionA;

    public void functionB() {
        functionA.accept(intObjB);
    }

    public void setFuncitonA(Consumer<Integer> functionA) {
        this.functionA = functionA;
    }

    public void defaultFunctionA(Integer intObj) {
        return;
    }
}
```

Should it be like in `ClassB1` or like in `ClassB2`, or, does it matter at all? What is the standard pattern of writing such code?
This tells a maintainer (including yourself) that this consumer is ***required*** at instantiation time. This means you avoid the silly `null` check if it's truly something you know at instantiation time (and in this case, it *is* something you either know or can get). The standard pattern then becomes: * If you're comfortable checking that the field (or variable) is `null` *before* use, then lazily instantiate the field. * If you *must* be sure that the field (or variable) is not `null` *before* use, then eagerly instantiate the field. There's no hard and fast rule to use or prefer one over the other. This will heavily depend on your use case. Upvotes: 2 [selected_answer]
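The trade-off described above can also be sketched outside Java. Below is a minimal, hypothetical Python sketch (the class and attribute names are mine, not from the question) contrasting the two styles: the lazy variant needs a `None` guard before every use, while the eager variant installs a harmless default so call sites are always safe.

```python
class LazyB:
    """Lazy: the callback may stay None, so every use needs a guard."""
    def __init__(self):
        self.callback = None  # not required at construction time

    def run(self, value):
        if self.callback is not None:  # the guard the lazy style forces on us
            return self.callback(value)
        return None


class EagerB:
    """Eager: a harmless default is installed up front, so no guard is needed."""
    def __init__(self):
        self.callback = lambda value: None  # no-op default stands in for null

    def run(self, value):
        return self.callback(value)  # always safe to call


lazy, eager = LazyB(), EagerB()
print(lazy.run(2))   # None: no callback installed, the guard short-circuits
print(eager.run(2))  # None: the default no-op runs instead
eager.callback = lambda v: v * 2
print(eager.run(2))  # 4: the injected callback is used
```

Which one to pick mirrors the rule of thumb in the answer: eager buys guard-free call sites at the cost of promising a sensible default at construction time.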
2018/03/17
487
1,956
<issue_start>username_0: The errors I am facing is " setCredentials - can't resolve this method initializeApp - can't resolve this method

```
FileInputStream serviceAccount = new FileInputStream("path/to/serviceAccountKey.json");

FirebaseOptions options = new FirebaseOptions.Builder()
    .setCredentials(GoogleCredentials.fromStream(serviceAccount))
    .setDatabaseUrl("https://qrmoney-1aec0.firebaseio.com")
    .build();

FirebaseApp.initializeApp(options);
```

Note - I am importing everything.
2018/03/17
2,060
6,284
<issue_start>username_0: I am trying to rename all files and subdirectories in a given directory. I create vectors of the old and new names, and map a renaming function over the two vectors. The vectors of file paths are created ok but the mapping `(map #(re-name (as-file %1) (as-file %2)) flist new-flist)))` does not seem to call the renaming function at all (I've put a print call in it to test). After much tweaking and searching I am still perplexed as to what I'm missing. Also, I would like to know how I could obviate resorting to forums by debugging it by stepping through the code. Code: ``` ;;;; Use: Replaces whitespace in names of all directories and files in the given directory with the given separator. (ns file-renamer.core (:gen-class) (:require [clojure.string :as str])) (use '[clojure.java.io]) (use '[clojure.pprint]) ; Constants for testing -> params live (def direc "/home/john/test/") (def separator "-") ; ==================================== (defn traverse-dir "Return a seq of pathnames of directories and files in the given directoryt" [dir-path] (map #(.getAbsolutePath %) (file-seq (file dir-path)))) (defn replace-spaces "Return a copy of string s1 with all sequences of spaces replaced with string s2" [s1 s2] (str/replace s1 #"\s+" s2)) (defn re-name "Rename a file" [old-file new-file] (pprint (str "Renaming: " old-file " ==>> " new-file)) ; put here for debugging (.renameTo old-file new-file)) (defn -main "Map a fn to rename a file over all files and dirs in the given directory" [& args] (let [flist (vec (traverse-dir direc)) new-flist (vec (map #(replace-spaces % separator) flist))] (pprint flist) (pprint new-flist) (map #(re-name (as-file %1) (as-file %2)) flist new-flist))) ``` Output: ``` 17 Mar 20:53 /file-renamer ⋄ lein run ["/home/john/test" "/home/john/test/test 108" "/home/john/test/test 108/ baaaa rrrr" "/home/john/test/test 108/ baaaa rrrr/Open Document Text .... odt" "/home/john/test/test 108/ baaaa rrrr/ba z z er ." 
"/home/john/test/test 108/ baaaa rrrr/ba z z er ./freed.frm" "/home/john/test/test 108/ baaaa rrrr/ba z z er ./New Folder" "/home/john/test/test 108/ baaaa rrrr/ba z z er ./New Folder/Plain Text.txt" "/home/john/test/test 108/ baaaa rrrr/ba z z er ./fr ed.txt" "/home/john/test/test 108/s p a c e s------S P A C E S " "/home/john/test/fox" "/home/john/test/foo" "/home/john/test/fog"] ["/home/john/test" "/home/john/test/test-108" "/home/john/test/test-108/-baaaa-rrrr" "/home/john/test/test-108/-baaaa-rrrr/Open-Document-Text-....-odt" "/home/john/test/test-108/-baaaa-rrrr/ba-z-z-er-." "/home/john/test/test-108/-baaaa-rrrr/ba-z-z-er-./freed.frm" "/home/john/test/test-108/-baaaa-rrrr/ba-z-z-er-./New-Folder" "/home/john/test/test-108/-baaaa-rrrr/ba-z-z-er-./New-Folder/Plain-Text.txt" "/home/john/test/test-108/-baaaa-rrrr/ba-z-z-er-./fr-ed.txt" "/home/john/test/test-108/s-p-a-c-e-s------S-P-A-C-E-S-" "/home/john/test/fox" "/home/john/test/foo" "/home/john/test/fog"] 17 Mar 20:53 /file-renamer ⋄ ```<issue_comment>username_1: Check the docs on [doall](https://clojuredocs.org/clojure.core/doall). Here, renaming is a side-effect. The map returns a lazy sequence. You need to force mapping over the whole collections in order for the renaming to happen. Upvotes: 2 <issue_comment>username_2: You could force the evaluation of the lazy sequence that `map` returns using `doall` as others have suggested (or just use the strict `mapv`), but `map` isn't intended to carry out side effects. Adding `doall` allows it to work, but it's just covering up a smell. I'd use `doseq` for this since you're only iterating the sequences to carry out side effects. You don't actually care what the `map` evaluates to. 
Unfortunately, this solution requires explicitly zipping the two collections together before they can be given to `doseq`, which bulks it up a bit:

```
(let [zipped (map vector flist new-flist)] ; Zip the sequences together
  (doseq [[f1 f2] zipped]
    (re-name (as-file f1) (as-file f2))))
```

It's not as terse, but the use of `doseq` makes it much clearer what the intent is. Upvotes: 2 <issue_comment>username_3: There are a couple of problems here. They are not directly your fault; it's more that the REPL is playing a trick on you and that you are (probably) not fully understanding the core concepts of Clojure and lazy operations/sequences.

The map function returns a lazy sequence, which needs to be realized at some point in time, and you are not doing that. On top of that, the re-name function is not a side-effect-free (pure) function.

The REPL is also playing a trick on you: if you call (-main) on the REPL, it will automatically realize those sequences, which can cause a lot of confusion for someone new to Clojure.

The most straightforward solution would be to just use the [doall](https://clojuredocs.org/clojure.core/doall) function.

```
(doall (map #(re-name (as-file %1) (as-file %2)) flist new-flist))
```

But this is the quick and dirty way, and I am going to quote Stuart Sierra here:

> You might get the advice that you can “force” a lazy sequence to be evaluated with doall or dorun. There are also snippets floating around that purport to “unchunk” a sequence.
>
> In my opinion, the presence of doall, dorun, or even “unchunk” is almost always a sign that something never should have been a lazy sequence in the first place.
> > > A better solution in this case would be to use the [doseq](https://clojuredocs.org/clojure.core/doseq) function and write something like this: ``` (defn -main "Map a fn to rename a file over all files and dirs in the given directory" [& args] (doseq [file-string (traverse-dir direc)] (let [input-file (as-file file-string) output-file (as-file (replace-spaces file-string separator))] (re-name input-file output-file)))) ``` This could also be written way shorter. A good reading that will help a lot in general is the blog post from Stuart Sierra: [Clojure Don’ts: Lazy Effects](https://stuartsierra.com/2015/08/25/clojure-donts-lazy-effects). Upvotes: 4 [selected_answer]
2018/03/17
1,154
4,258
<issue_start>username_0: How can I create an authorization schema for pages? For example: I have

* pages like `page1`, `page2`, `page3` and `page4` with
* users as `user1`, `user2`, `user3` and `user4`.

When I login

* user1 should get only page1 and page4
* user2 --> page2 and page3
* user3 --> page1 and page3
* user4 --> page2 and page4

I.e. in a `priv` table the page numbers and the users are stored. The boolean return value function is working for components, but for the page it shows an error. How can I write an authorization schema for the above roles?<issue_comment>username_1: 1 - Go to Shared Components

2 - Click on Authorization Schemes

3 - Create a new scheme of the type "PL/SQL function returning boolean"

4 - Your function should return "false" to deny access

In this PL/SQL code you have access to the :APP_USER variable and :APP_PAGE_ID (page number). If you have a function that receives the user and the page and returns a boolean telling whether he has access or not, just do:

```
BEGIN
  RETURN MYFUNCTION(:APP_USER, :APP_PAGE_ID);
END;
```

5 - Go to "Edit Application Properties" > "Security" and choose your authorization scheme.

6 - You don't need to set the authorization scheme on every page; just do step 5.

[![enter image description here](https://i.stack.imgur.com/q71kz.png)](https://i.stack.imgur.com/q71kz.png)

---

I don't know what your table looks like. But supposing that it has two columns like

```
USER   PAGE_NUMBER
user1  1
user1  4
user2  2
user2  3
user3  1
user3  3
user4  2
user4  4
```

your function would look like this:

```
CREATE OR REPLACE FUNCTION "MYFUNCTION" (p_user IN VARCHAR2, p_page_number IN NUMBER)
RETURN BOOLEAN
AS
  v_count NUMBER := 0;
BEGIN
  SELECT count(*)
    INTO v_count
    FROM mytable
   WHERE user = p_user
     AND page_number = p_page_number;

  IF v_count = 0 THEN
    RETURN false;
  END IF;

  RETURN true;
END;
```

Upvotes: 2 <issue_comment>username_2: The above authorization schema with a boolean-returning function works for components if we save the component id along with the page id and check for components. But for page authorization it is not working and shows an error. Tested with the above table structure and function.

[enter image description here](https://i.stack.imgur.com/mrfkw.jpg)

Upvotes: -1 <issue_comment>username_3: You should consider a custom Authorization Scheme & Authentication Scheme:

1- Create a table for the users to store each user name and password and put a column for 'user_type'; in this column you will have the types of users you have, for example: 'dev' for developer, 'adm' for admin and so on...

2- Create a PL/SQL function that returns a Boolean (true/false) based on a query that compares the user name and password you get from the user with the ones stored in your users table.
This function will also set 'Session Variables' to pass the user_name and user_type after a successful login, something like:

```
apex_util.set_session_state('SESSION_U_TYPE',temp_type);
apex_util.set_session_state('SESSION_USER_NAME',in_username);
```

3- At the shared components of your application create 'Application Items' with the same exact names you use in your function (here SESSION_U_TYPE, SESSION_USER_NAME)

4- Edit your 'Log-In' page, remove or comment out the default code, and use your function, passing the user_name and password via bind variables

5- Now go to shared components again and create a custom Authorization Scheme, give it a name, choose 'SQL exist' as the type and write a SQL query to check the user_type of the current user, something like:

```
SELECT * FROM my_users
WHERE user_name = :SESSION_USER_NAME
AND user_type = 'dev';
```

Repeat this step to create as many 'access levels' as you need

6- Finally, go to each page and, under the security section, choose the scheme or the user level you want to allow to access this page

-- Notes:

* This is a fast solution (but it works fine) and you can add many improvements to it; for example, you should create a procedure to handle the log-in and pass the parameters to the authentication function
* YOU HAVE TO HASH THE PASSWORD, AS YOU SHOULD NEVER STORE PASSWORDS IN PLAIN TEXT!

I hope this was useful

Upvotes: 0
2018/03/17
1,112
3,895
<issue_start>username_0: I have a script that pulls information from an API and is supposed to write it to a CSV file which I have in the same directory. For some reason, all the code seems to execute fine but the CSV never actually contains any data. Am I missing something crucial? I've checked the documentation for the CSV module but can't find anything. Here is my code: ``` import time, json, requests, csv with open('data.csv', 'w') as csvfile: fieldnames = ['time','last','vwap','high','low','open','vol'] writer = csv.DictWriter(csvfile, fieldnames=fieldnames) writer.writeheader() def btstampTime(): bitStampTick = requests.get('https://www.bitstamp.net/api/ticker/') return bitStampTick.json()['timestamp'] def btstampLast(): bitStampTick = requests.get('https://www.bitstamp.net/api/ticker/') return bitStampTick.json()['last'] def btstampVWAP(): bitStampTick = requests.get('https://www.bitstamp.net/api/ticker/') return bitStampTick.json()['vwap'] def btstampHigh(): bitStampTick = requests.get('https://www.bitstamp.net/api/ticker/') return bitStampTick.json()['high'] def btstampLow(): bitStampTick = requests.get('https://www.bitstamp.net/api/ticker/') return bitStampTick.json()['low'] def btstampOpen(): bitStampTick = requests.get('https://www.bitstamp.net/api/ticker/') return bitStampTick.json()['open'] def btstampVol(): bitStampTick = requests.get('https://www.bitstamp.net/api/ticker/') return bitStampTick.json()['volume'] while True: timestamp = btstampTime() last = float(btstampLast()) vwap = float(btstampVWAP()) high = float(btstampHigh()) low = float(btstampLow()) open = float(btstampOpen()) vol = float(btstampVol()) writer.writerow({'time': timestamp, 'last': last, 'vwap': vwap, 'high': high, 'low': low, 'open': open, 'vol': vol}) print('Time: ', timestamp) print('Last =', last) print('VWAP =', vwap) print('High =', high) print('Low =', low) print('Open =', open) print('Volume =', vol) print('') time.sleep(60) ```<issue_comment>username_1: Here is working 
(and optimized :)) solution. Bear in mind that info will be written to file, but it is possible that OS won't update file size visually in real time. ``` import os import csv import time import requests csv_file = open('data.csv', 'w') field_names = ['time', 'last', 'vwap', 'high', 'low', 'open', 'vol'] writer = csv.DictWriter(csv_file, fieldnames=field_names) writer.writeheader() while True: info = requests.get('https://www.bitstamp.net/api/ticker/').json() writer.writerow({'time': info['timestamp'], 'last': info['last'], 'vwap': info['vwap'], 'high': info['high'], 'low' : info['low'], 'open': info['open'], 'vol' : info['volume']}) csv_file.flush() os.fsync(csv_file.fileno()) time.sleep(60) # this line is optional, you can delete it. print('Info appended.') ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Using pandas: ``` import pandas as pd import requests d = requests.get('https://www.bitstamp.net/api/ticker/').json() cols = ['timestamp', 'last', 'vwap', 'high', 'low', 'open', 'volume'] df = pd.DataFrame([d.get(i,'') for i in cols], cols).T df.to_csv('output.csv', index=False) ``` Creates ouput.csv with: ``` timestamp,last,vwap,high,low,open,volume 1521332251,7831.18,8069.97,8356.40000000,7730.23000000,7860.83,11882.59781852 ``` Upvotes: 0
2018/03/17
549
1,883
<issue_start>username_0: Hello, I need to get a variable out of a function, but the value I get back is undefined.

```js
function etr() {
  var img = new Image();
  img.onload = paramsImg;
  img.src = picFile.result;
  function paramsImg() {
    return img.height;
  };
};
var vvvr = etr();
alert(vvvr);
```<issue_comment>username_1: Your function `etr` doesn't return anything. I see that you are trying to return from an event handler for onload, but that only returns from the `paramsImg` function and not from `etr` (which has already returned before the image loads).

You should either make `etr` accept a callback function or return a Promise so that you can alert the image's height ***after*** the image has loaded.
2018/03/17
844
2,961
<issue_start>username_0: I used below scripts to log into the following website with a username and password. It used to work fine. However, I think recently Etsy set the username ( and maybe password too) to hidden type. So username cannot be sent to the Email address field. Here are the scripts. Anyone knows how to fix this particular case? The error message is:

> ElementNotVisibleException: Message: element not visible

```
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait

chrome_options = webdriver.ChromeOptions()
chrome_options.add_experimental_option('prefs', profile)
chrome_options.add_argument('--kiosk-printing')
self.driver = webdriver.Chrome('C:/Python27/chromedriver.exe',chrome_options=chrome_options)
self.driver.get("https://www.etsy.com/signin")

usernameFieldID = "username"
passFieldID = "<PASSWORD>"
signButtonXpath = "//input[@value='Sign in']"

usernameFieldElement = WebDriverWait(driver, 5).until(lambda driver:driver.find_element_by_name(usernameFieldID))
passFieldElement = WebDriverWait(driver, 5).until(lambda driver:driver.find_element_by_name(passFieldID))
signButtonElement = WebDriverWait(driver, 5).until(lambda driver:driver.find_element_by_xpath(signButtonXpath))

usernameFieldElement.send_keys("<EMAIL>")
passFieldElement.send_keys("<PASSWORD>")
signButtonElement.click()
```
2018/03/17
842
2,767
<issue_start>username_0: When I run the following from a Windows computer (via [Cygwin](https://cygwin.com)) I get an error:

```
ssh <EMAIL> 'cd ./bin && ./script.rb'
./script.rb:2:in `gsub': invalid byte sequence in US-ASCII (ArgumentError)
```

Running the same code (the entire `ssh` line above) from Terminal on Mac is successful. How can this be? This is the two-line script named `script.rb` above:

```
#!/usr/bin/env ruby
File.open('/home/bob/input.txt').read.gsub(/A/, 'B')
```

More info... I **emptied** the input file `input.txt` above, and still its size as reported by `ls -l input.txt` and `stat input.txt` is `3`. Running `file input.txt` reports `UTF-8 Unicode text, with no line terminators`; if I place just one character into `input.txt`, its size becomes `5` and `file` reports it as `UTF-8 Unicode (with BOM) text`. The Ruby version on the Linux server is `2.2.7p470` and RubyGems (`gem --version`) is `2.6.11`. The file `input.txt` is generated by a 3rd party program (it is a data export). Running `File.open('/home/bob/input.txt').read.encoding` gives `US-ASCII` when executed from Windows/Cygwin/ssh and `UTF-8` from macOS/Terminal/ssh.
2018/03/17
803
2,640
<issue_start>username_0: I am still extremely new to Python, and I am working on an assignment for my school. I need to write code to pull all of the html from a website then save it to a `csv` file. I believe I somehow need to turn the links into a list and then write the list, but I'm unsure how to do that. This is what I have so far:

```
import bs4
import requests
from bs4 import BeautifulSoup, SoupStrainer
import csv

search_link = "https://www.census.gov/programs-surveys/popest.html"
r = requests.get(search_link)
raw_html = r.text
soup = BeautifulSoup(raw_html, 'html.parser')

all_links = soup.find_all("a")

rem_dup = set()
for link in all_links:
    hrefs = str(link.get("href"))
    if hrefs.startswith('#http'):
        rem_dup.add(hrefs[1:])
    elif hrefs.endswith('.gov'):
        rem_dup.add(hrefs + '/')
    elif hrefs.startswith('/'):
        rem_dup.add('https://www.census.gov' + hrefs)
    else:
        rem_dup.add(hrefs)

filename = "Page_Links.csv"
f = open(filename, "w+")
f.write("LINKS\n")
f.write(all_links)
f.close()
```
2018/03/17
654
2,490
<issue_start>username_0: Trying to use "dotnet ef" command in Package Manager Console. the PMC is cd to the .csproj directory, and still getting: ``` dotnet : Specify which project file to use because this 'C:\Users\PC-NAME\Source\Repos\TestProject\Test" contains more than one project file. At line:1 char:1 dotnet ef migrations add TestMigration + CategoryInfo : NotSpecified: (Specify which p...e project file.:String) [], RemoteException + FullyQualifiedErrorId : NativeCommandError ``` Tried use -p / --p and point to the .csproj file / directory - still same error. **Using .NET CORE, MVC project, Latest EF Core 2.0.2 version.** **There's no so much valuable data** on the internet about that problem, just a wild guess that dotnet ef command is looking for .exe file to run on. > > Hoping for help. > > ><issue_comment>username_1: You'll need to specify both `-p` and `-s`: ```none dotnet ef ... -p ThisOne.csproj -s ThisOne.csproj ``` where ```none -p --project The project to use. -s --startup-project The startup project to use. ``` [EF Core .NET Command-line Tools](https://learn.microsoft.com/en-us/ef/core/miscellaneous/cli/dotnet) Upvotes: -1 <issue_comment>username_2: If your project contains more than one .csproj file you should move them to another directory other than the project folder and run again your commande line. In my case I have two .csproj in the project folder : CoreWebApplication.csproj and CoreWebApplication-backup.csproj so I moved the CoreWebApplication-backup.csproj and I kept the CoreWebApplication.csproj dotnet ef ... -p CoreWebApplication.csproj -s CoreWebApplication.csproj Upvotes: 2 <issue_comment>username_3: You need to specify your startup project and your data project using `--startup-project` and `-p` respectively. 
``` dotnet ef migrations add TestMigration -p .csproj --startup-project .csproj ``` Upvotes: 1 <issue_comment>username_4: In my case I had this Docker compose project created earlier on Visual Studio for MAC which included docker-compose.dcproj file in my folder. I had to move "**docker-compose.dcproj**" file to another folder before it worked. You can also delete that file cos I don't think it is useful anyway. command that worked for me is below for scaffold ``` dotnet ef dbcontext scaffold 'Server=localhost;Database=;User=;Password=;TrustServerCertificate=True;' Microsoft.EntityFrameworkCore.SqlServer -p projectName.csproj -s projectName.csproj -o Data ``` Upvotes: 0
2018/03/17
977
4,171
<issue_start>username_0: I'm writing a genetic algorithm to minimize a function. I have two questions, one in regards to selection and the other with regards to crossover and what to do when it doesn't happen. Here's an outline of what I'm doing: ``` while (number of new population < current population) # Evaluate all fitnesses and give them a rank. Choose individual based on rank (wheel roulette) to get first parent. # Do it again to get second parent, ensuring parent1 =/= parent2 # Elitism (do only once): choose the fittest individual and immediately copy to new generation Multi-point crossover: 50% chance if (crossover happened) do single point mutation on child (0.75%) else pick random individual to be copied into new population. end ``` And all of this is under another `while` loop which tracks fitness progression and number of iterations, which I didn't include. So, my questions: 1. As you can see, two parents are chosen randomly in each iteration until the new population is filled up. So, the two same parents may mate more than once and surely several fit parents will mate many more times than once. Is this in any way bad? 2. In the [obitko tutorial](http://obitko.com/tutorials/genetic-algorithms/ga-basic-description.php), it says if crossover doesn't happen, then child is exact copy of parents. I don't even understand what that means, so, as you can see, I just picked a random parent (uniformly; no fitness considered) and copied to new population. This seems weird to me. Whether I actually do this or not, my results really don't change that much. What's the proper way to handle the case when crossover doesn't happen?<issue_comment>username_1: 1. I don't see anything wrong with the same individual being the parent of more than one child per generation. It can only affect your diversity a little bit. 
If you don't like this, or find a real lack of diversity in the final generations, you can actually flag the individual so it cannot be a parent more than once per generation. 2. I actually don't fully agree with the tutorial; I think after you have selected the individuals that will become parents (based on their fitness, of course) you *should* actually perform the crossover. Otherwise you will be cloning a lot of individuals into the next generation. Upvotes: 0 <issue_comment>username_2: Some parents having several offspring is common; I'd even say this is the default practice (and consider biological evolution, where precisely this is one of the main ingredients). **"If crossover doesn't happen, then child is exact copy of parents"** That is a bit confusing. Crossover (well explained in your link) means taking some genes from one parent and some from the other. This is called sexual reproduction and requires two (or more?) parents. But asexual reproduction is also possible. In this case, you simply take one parent and mutate its genome in the new individual. This is almost what you were attempting, but you are missing the important mutation step (note mutations can be very aggressive or very conservative!) Note that asexual reproduction **requires** mutation after copying the genome to create diversity, while in sexual reproduction this is an optional step. It is fine to use either type of reproduction, or a mix of them. By the way: in some problems genes might not always have the same size. Sexual reproduction is problematic in this case. If you are interested in this problem, take a look at the NEAT algorithm, a popular neuroevolution algorithm designed to address this ([wiki](https://en.wikipedia.org/wiki/Neuroevolution_of_augmenting_topologies) and [paper](http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf)). Finally, elitism (copying the best-performing individuals to the next generation) is common, but it may be problematic.
Genetic algorithms often stall in sub-optimal solutions (called local maxima, where any changes decrease fitness). Elitism can contribute to this problem. Of course, the opposite problem is too much diversity being similar to random search, so you need to find the right balance. Upvotes: 2 [selected_answer]
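The advice above — crossover with some probability, otherwise clone one parent but always mutate the clone — can be sketched in a few lines. This is an illustrative sketch, not code from the question; the genome is assumed to be a plain list of bits, and the 50% crossover / 0.75% mutation rates are the ones from the question's pseudocode:

```python
import random

def crossover(parent1, parent2):
    """Single-point crossover: prefix of parent1 + suffix of parent2."""
    point = random.randrange(1, len(parent1))
    return parent1[:point] + parent2[point:]

def mutate(genome, rate=0.0075):
    """Flip each bit independently with probability `rate`."""
    return [1 - gene if random.random() < rate else gene for gene in genome]

def make_child(parent1, parent2, crossover_prob=0.5):
    """Sexual reproduction with probability crossover_prob, else asexual."""
    if random.random() < crossover_prob:
        return mutate(crossover(parent1, parent2))
    # Asexual fallback: clone one parent, then mutate -- without the
    # mutation step the clone adds nothing new to the population.
    return mutate(random.choice([parent1, parent2]))
```

Note that with a 0.75% per-gene rate the asexual branch will still often return an exact clone; using a higher mutation rate only for the asexual branch is one way to guarantee some diversity.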
2018/03/17
479
1,457
<issue_start>username_0: Documentation says: ``` fun bundleOf(vararg pairs: Pair): Bundle ``` > > Returns a new Bundle with the given key/value pairs as elements. > > > I tried: ``` val bundle = bundleOf { Pair("KEY_PRICE", 50.0) Pair("KEY_IS_FROZEN", false) } ``` But it is showing error.<issue_comment>username_1: If it takes a `vararg`, you have to supply your arguments as parameters, not a lambda. Try this: ``` val bundle = bundleOf( Pair("KEY_PRICE", 50.0), Pair("KEY_IS_FROZEN", false) ) ``` Essentially, change the `{` and `}` brackets you have to `(` and `)` and add a comma between them. Another approach would be to use Kotlin's `to` function, which combines its left and right side into a `Pair`. That makes the code even more succinct: ``` val bundle = bundleOf( "KEY_PRICE" to 50.0, "KEY_IS_FROZEN" to false ) ``` Upvotes: 6 [selected_answer]<issue_comment>username_2: How about this? ``` val bundle = bundleOf ( "KEY_PRICE" to 50.0, "KEY_IS_FROZEN" to false ) ``` `to` is a great way to create `Pair` objects. The beauty of infix function with awesome readability. Upvotes: 4 <issue_comment>username_3: Just to complete the other answers: First, to use `bundleOf`, need to add `implementation 'androidx.core:core-ktx:1.0.0'` to the `build.gradle` then: ``` var bundle = bundleOf("KEY_PRICE" to 50.0, "KEY_IS_FROZEN" to false) ``` Upvotes: 4
2018/03/17
1,519
4,874
<issue_start>username_0: Im trying ti replicate a sentence classification model from this github <https://github.com/bgmartins/sentence-classification/blob/master/sentence-classification.py> **Here is my code:** ``` data = [ ( row["sentence"] , row["label"] ) for row in csv.DictReader(open("./test-data.txt"), delimiter='\t', quoting=csv.QUOTE_NONE) ] random.shuffle( data ) train_size = int(len(data) * percent) train_texts = [ txt.lower() for ( txt, label ) in data[0:train_size] ] test_texts = [ txt.lower() for ( txt, label ) in data[train_size:-1] ] train_labels = [ label for ( txt , label ) in data[0:train_size] ] test_labels = [ label for ( txt , label ) in data[train_size:-1] ] num_classes = len( set( train_labels + test_labels ) ) tokenizer = Tokenizer(nb_words=max_features, lower=True, split=" ") tokenizer.fit_on_texts(train_texts) train_sequences = sequence.pad_sequences( tokenizer.texts_to_sequences( train_texts ) , maxlen=max_sent_len ) test_sequences = sequence.pad_sequences( tokenizer.texts_to_sequences( test_texts ) , maxlen=max_sent_len ) train_matrix = tokenizer.texts_to_matrix( train_texts ) test_matrix = tokenizer.texts_to_matrix( test_texts ) embedding_weights = np.zeros( ( max_features , embeddings_dim ) ) for word,index in tokenizer.word_index.items(): if index < max_features: try: embedding_weights[index,:] = embeddings[word] except: embedding_weights[index,:] = np.random.rand( 1 , embeddings_dim ) le = preprocessing.LabelEncoder( ) le.fit( train_labels + test_labels ) train_labels = le.transform( train_labels ) test_labels = le.transform( test_labels ) ``` **But when I try to fit the LSTM:** ``` model = Sequential() model.add(Embedding(max_features, embeddings_dim, input_length=max_sent_len, mask_zero=True, weights=[embedding_weights] )) model.add(Dropout(0.25)) model.add(LSTM(output_dim=embeddings_dim , activation='sigmoid', inner_activation='hard_sigmoid', return_sequences=True)) model.add(Dropout(0.25)) 
model.add(LSTM(output_dim=embeddings_dim , activation='sigmoid', inner_activation='hard_sigmoid')) model.add(Dropout(0.25)) model.add(Dense(1)) model.add(Activation('sigmoid')) if num_classes == 2: model.compile(loss='binary_crossentropy', optimizer='adam', class_mode='binary') else: model.compile(loss='categorical_crossentropy', optimizer='adam') model.fit( train_sequences , train_labels , nb_epoch=30, batch_size=32) ``` **I get this error:** ``` TypeError: run() got an unexpected keyword argument 'class_mode' ``` *Full error traceback:* ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () 12 if num_classes == 2: model.compile(loss='binary_crossentropy', optimizer='adam', class_mode='binary') 13 else: model.compile(loss='categorical_crossentropy', optimizer='adam') ---> 14 model.fit( train_sequences , train_labels , nb_epoch=30, batch_size=32) 15 results = model.predict_classes( test_sequences ) 16 print ("Accuracy = " + repr( sklearn.metrics.accuracy_score( test_labels , results ) )) ~\Anaconda3\lib\site-packages\keras\models.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs) 961 initial_epoch=initial_epoch, 962 steps_per_epoch=steps_per_epoch, --> 963 validation_steps=validation_steps) 964 965 def evaluate(self, x=None, y=None, ~\Anaconda3\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs) 1703 initial_epoch=initial_epoch, 1704 steps_per_epoch=steps_per_epoch, -> 1705 validation_steps=validation_steps) 1706 1707 def evaluate(self, x=None, y=None, ~\Anaconda3\lib\site-packages\keras\engine\training.py in _fit_loop(self, f, ins, out_labels, batch_size, epochs, verbose, callbacks, val_f, val_ins, shuffle, callback_metrics, initial_epoch, steps_per_epoch, validation_steps) 1233 ins_batch[i] = ins_batch[i].toarray() 1234 -> 1235 outs = f(ins_batch) 1236 if not isinstance(outs, list): 1237 outs = [outs] ~\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py in __call__(self, inputs) 2476 session = get_session() 2477 updated = session.run(fetches=fetches, feed_dict=feed_dict, -> 2478 **self.session_kwargs) 2479 return updated[:len(self.outputs)] 2480 ```<issue_comment>username_1: `model.compile` does not take any parameter called `class_mode`, remove it. Upvotes: 2 <issue_comment>username_2: See: <https://keras.io/preprocessing/image/> `class_mode` is used with generators. Upvotes: 1
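The `TypeError` here comes from a keyword argument being passed to a function that does not accept it. As a generic diagnostic (not part of the answers above, and not Keras-specific), you can inspect a callable's signature before calling it to see which keywords it would reject — note this cannot catch arguments that are silently forwarded through `**kwargs` to a deeper call, which is exactly what happened with `class_mode` ending up in `session.run`:

```python
import inspect

def unsupported_kwargs(func, kwargs):
    """Return the subset of `kwargs` that `func` will reject outright."""
    params = inspect.signature(func).parameters
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        # func accepts **kwargs, so nothing is rejected at this level --
        # but the extras may still blow up wherever func forwards them
        return set()
    return set(kwargs) - set(params)
```

For example, against a function `def fit(x, y, batch_size=32, epochs=1)`, the call `unsupported_kwargs(fit, {"x": 1, "y": 2, "nb_epoch": 30})` reports `nb_epoch` as unsupported.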
2018/03/17
1,236
4,581
<issue_start>username_0: In the homepage there is a menu with some categories and below there are the latest 10 posts. A post can have many categories and one category can belong to many posts so there are 2 models and a pivot table "category\_post" with 2 columns: id and name. So in the homepage, there is a menu with some categories and the posts: ``` @foreach($categories->take(6) as $category) * {{$category->name}} @endforeach @foreach($posts as $post) ![Card image cap]({{$post->image}}) ##### {{$post->name}} [More]({{route('posts.show', ['id' => $post->id, 'slug' => $post->slug])}}) @endforeach ``` I want that when each category is clicked to show only the posts of that category in the homepage, but in the same homepage, not in a specific category page. So maybe the best approach is using AJAX. Im not understanding what is necessary in the controllers and routes and the jquery is also dont working properly. Do you know how what is necessary to list the posts when a category is clicked with AJAX? 
In the **Frontcontroller** I already pass the categories and posts to the homepage view: ``` class FrontController extends Controller { public function index(){ return view('home') ->with('categories', Category::orderBy('created_at', 'desc')->get()) ->with('posts', Post::orderBy('created_at','desc')->take(10)->get()); } } ``` **route to the homepage:** ``` Route::get('/', [ 'uses' => 'FrontController@index', 'as' =>'index' ]); ``` **Post and Category models:** ``` class Post extends Model { public function categories(){ return $this->belongsToMany('App\Category'); } } class Category extends Model { public function posts(){ return $this->belongsToMany('App\Post'); } } ``` **In this homepage I have at bottom the ajax below** ``` @section('scripts') $(function() { $("a[name='category']").on('click', function(){ var category_id = $(this).attr("id"); alert("test"); $.get('/ajax-category?category_id=' + category_id, function (data) { $('#posts').empty(); $.each(data,function(index, postObj){ $('#posts').append(''); }); }); }); }); @stop ``` **Pivot table "category_post" structure** ``` public function up() { Schema::create('category_post', function (Blueprint $table) { $table->increments('id'); $table->integer('post_id'); $table->integer('category_id'); $table->timestamps(); }); } ```
$this->belongsToMany(Category::class,'post_category','post_id','category_id'); } ``` You will get those posts that have $request->id category For your Ajax part, you have to make an ajax request to the above route, You will get the posts which you want, so once you're done change the content of your home page (If you are using VueJS or similar Javascript framework it's really easy) ... That is all, I think! Try this for your ajax request : ```html [Category](#) Title : Posts Are Here ! $(function() { $("a[name='category']").on('click', function(){ var category_id = $(this).attr("id"); $.ajax({ url:"https://jsonplaceholder.typicode.com/posts", type: 'GET', success:function(result){ $('#posts').empty(); $.each(result,function(index, postObj){ $('#posts').append("<li>"+postObj.title+"</li><p>"+postObj.body+"</p>"); }); }, error: function(error) { console.log(error.status) } }); }); }); ``` * Always keep your Javascript queries at the end of the page. Upvotes: 3 [selected_answer]<issue_comment>username_2: I think your ajax is not working because your page is refreshing when you click the category link; try adding a hashtag in the href property. Example: ``` @foreach($categories->take(6) as $category) * [{{$category->name}}](#) @endforeach ``` or change with . sorry for my English. I hope it would help. Upvotes: 0
2018/03/17
1,358
4,615
<issue_start>username_0: I'm creating an online courses website, in which you create a course object and add chapter objects to it. Each chapter has a foreign key to a course object: ``` class Course(models.Model): title = models.CharField(max_length=140, null=False) description = models.TextField(null=False) cover = models.ImageField(null=False) slug = models.SlugField(max_length=100, blank=True) pack = models.ForeignKey(CoursePack, on_delete=models.SET_NULL, null=True, blank=True) tag = models.ForeignKey(CourseGroup, on_delete=models.SET_NULL, null=True, blank=True) def __str__(self): return self.title def save(self, *args, **kwargs): if not self.id: self.slug = slugify(self.title) super(Course, self).save(*args, **kwargs) def get_absolute_url(self): return reverse("course_detail", kwargs={'pk':self.pk,'slug': self.slug}) signals.pre_save.connect(instance_pre_save, sender="course.Course") class Chapter(models.Model): title = models.CharField(max_length=140, null=False) text = RichTextUploadingField() slug = models.SlugField(max_length=100, blank=True) course = models.ForeignKey(Course, on_delete=models.CASCADE, null=False, blank=False) def __str__(self): return self.title def save(self, *args, **kwargs): if not self.id: self.slug = slugify(self.title) super(Chapter, self).save(*args, **kwargs) def get_absolute_url(self): return reverse("chapter_detail", kwargs={'pk':self.pk,'slug': self.slug, "cpk":self.course.pk, "cslug":self.course.slug}) ``` my view is like this: ``` def chapter_view(request, pk, slug, cpk, cslug): context = {} chapter = Chapter.objects.get(pk=pk, slug=slug, cpk=cpk, cslug=cslug) context["chapter"] = chapter return render(request, "courses/chapter_detail.html", context) ``` And here is the url path: ``` re_path(r'^(?P[-\w]+),(?P\d+)/(?P[-\w]+),(?P\d+)/$', chapter\_view, name='chapter'), ``` And here is the anchor tag: ``` [{{ chapter }}]({% url 'chapter' slug=chapter.slug cslug=chapter.cslug pk=chapter.pk cpk=chapter.cpk %}) ``` I'm getting 
the error: > > FieldError at /cursos/name-of-the-course,course-pk-integer/name-of-the-chapter,chapter-pk/ > > > Cannot resolve keyword 'cpk' into field. Choices are: course, > course\_id, id, slug, text, title > > > Which shows that the URL is found, but I don't get why it doesn't just work.
2018/03/17
537
1,993
<issue_start>username_0: Assume I have an assortment of colors the user can put in. For now let's say we have three colors, red, blue, and yellow. Each color has a color ID, Red(1), Blue(2), and Yellow(3). I'm wondering if I can make a table that's something like this: ``` CREATE TABLE Colors ( Color CHAR(10) NOT NULL ,Color_ID INT NOT NULL CHECK (*Does Color ID match the color?*) ) GO ``` I.e. if you put in Red, the ID must be "1". Is this possible? Thanks!<issue_comment>username_1: Yes, they are usually called "lookup tables": you add an ID column and a value, then you reference the ID column in your other tables. As shawnt00 says (in the comments), you can set up referential integrity between tables using a "Foreign Key Constraint", so that the other tables cannot have values for other colors than the ones you have defined in your "Colors" table. Upvotes: 0 <issue_comment>username_2: You can validate the colors in one of two ways. One method is to use a check constraint in the table that contains them. This would not be the "colors" table, but something else. For instance: ``` create table blouses ( blouse_id int identity(1, 1) not null, purchase_date date, color varchar(255), constraint chk_blouses_color check (color in ('red', 'blue', 'yellow')) ); ``` In this method, the colors are stored not as numbers but strings. The constraint guarantees that the values are correct. The second way to ensure correct values is with a foreign key constraint. This might look like: ``` create table colors ( color_id int identity(1, 1) primary key, color varchar(10) not null unique ); insert into colors (color) values ('red'), ('blue'), ('yellow'); ``` This would then be referenced as: ``` create table blouses ( blouse_id int identity(1, 1) not null, purchase_date date, color_id int not null, constraint fk_blouses_color foreign key (color_id) references colors(color_id) ); ``` Upvotes: 3 [selected_answer]
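The foreign-key approach from the second answer can be exercised end-to-end with nothing but Python's built-in SQLite. This is only a sketch of the same idea in SQLite syntax (so `identity(1,1)` becomes `INTEGER PRIMARY KEY`); table and column names follow the answer's example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces FKs when enabled
conn.executescript("""
CREATE TABLE colors (
    color_id INTEGER PRIMARY KEY,
    color    TEXT NOT NULL UNIQUE
);
CREATE TABLE blouses (
    blouse_id     INTEGER PRIMARY KEY,
    purchase_date TEXT,
    color_id      INTEGER NOT NULL REFERENCES colors (color_id)
);
""")
conn.executemany("INSERT INTO colors (color_id, color) VALUES (?, ?)",
                 [(1, "red"), (2, "blue"), (3, "yellow")])

# A known color id is accepted...
conn.execute("INSERT INTO blouses (purchase_date, color_id) VALUES ('2018-03-17', 1)")
# ...while an unknown one is rejected by the foreign-key constraint.
try:
    conn.execute("INSERT INTO blouses (purchase_date, color_id) VALUES ('2018-03-17', 99)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

The insert with `color_id = 99` fails with an `IntegrityError`, which is exactly the guarantee the lookup table provides.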
2018/03/17
238
798
<issue_start>username_0: ``` 4 7.99 31.96 10 ``` invoice i\_invoicenumber i\_invoicedate i\_payment i\_emailaddress i\_subtotal ``` 50 06-FEB-18 Cash 40 <EMAIL>@test ```<issue_comment>username_1: Give this a try... ``` UPDATE invoice i INNER JOIN ( SELECT invoice_number, sum(total) as line_item_total FROM line_item GROUP BY invoice_number ) li ON i.invoice_number = li.invoice_number SET i.subtotal = li.line_item_total ``` Upvotes: 0 <issue_comment>username_2: In Oracle, you can use a subquery: ``` update invoice i set subtotal = (select sum(li.li_total) from line_item li where i.invoice_number = li.invoice_number ); ``` Upvotes: 2 [selected_answer]
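The correlated-subquery form in the accepted answer is portable beyond Oracle. Here is a self-contained check using Python's built-in SQLite; the sample figures are invented for illustration, and SQLite disallows an alias on the updated table, so the full table name replaces `i`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE invoice   (invoice_number INTEGER PRIMARY KEY, subtotal REAL);
CREATE TABLE line_item (invoice_number INTEGER, li_total REAL);
INSERT INTO invoice VALUES (50, 0);
INSERT INTO line_item VALUES (50, 31.96), (50, 8.04);
""")
# Same correlated subquery as the Oracle answer: each invoice's subtotal
# becomes the sum of its line-item totals.
conn.execute("""
UPDATE invoice
SET subtotal = (SELECT SUM(li.li_total)
                FROM line_item li
                WHERE invoice.invoice_number = li.invoice_number)
""")
subtotal = conn.execute(
    "SELECT subtotal FROM invoice WHERE invoice_number = 50").fetchone()[0]
```

After the update, invoice 50's subtotal is 31.96 + 8.04 = 40.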
2018/03/17
709
2,387
<issue_start>username_0: I have made a Google Sheet for handling my medication. In column A, I have a dropdown of medication that I take. The time I initially take the medication isn't important, but what is important is when I take the next pill and it is different for each one. One pill, after taking it, I need to take the next one 6 hours later while another, I don't take until 24 hours later. I go to sleep at different times every day so when I take the initial pill depends on when I wake up or when I eat breakfast or such. The most important one is the one that I need to wait 6 hours before I can take it again. That is my pain pill. So far, I have the sheet setup so that I have the drop down list of medication and column 2 inputs the exact date and time that I take the pill. The 3rd column, I have a formula that is set to take column 2 and add 6 hours to it which is not correct for all meds, so that is where I need help. I know it can be done, but I don't know who to ask. If I can get it so that the 3rd column will see what medication I chose then depending on which one it will put the time that I need to take the next one. Here is what I have so far but using a formula to add 6 hours which applies to all and that is why the formula won't work: **Worksheet**: ``` Medication Pill-taken-at +6 hours ---------------------------------------------------------- Mycoxofloppin 2018-03-17 12:44 2018-03-17 18:44 Maryjaneaspliff 2018-03-16 04:20 2018-03-16 10:20 ``` **Script**: ``` function onEdit(e) { var timezone = "GMT-5"; var timestamp_format = "yyyy-MM-dd hh:mm"; var ss = e.source.getActiveSheet(); var start = 2; var end = 4; var row = e.range.getRow(); if (e.range.getColumn() == 1){ ss.getRange(row, start).setValue(new Date()); } } ```
2018/03/17
3,875
12,907
<issue_start>username_0: I'm using [fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) within my React app. The application was deployed on a server and was working perfectly. I tested it multiple times. But, suddenly the application stopped working and I've no clue why. The issue is when I send a `get` request, I'm receiving a valid response from the server but also the fetch API is catching an exception and showing `TypeError: Failed to fetch`. I didn't even make any changes to the code and it's the issue with all of the React components. I'm getting a valid response: [![enter image description here](https://i.stack.imgur.com/bIITr.png)](https://i.stack.imgur.com/bIITr.png) But also getting this error at the same time: [![enter image description here](https://i.stack.imgur.com/8NUNH.png)](https://i.stack.imgur.com/8NUNH.png) ``` fetch(url) .then(res => res.json()) .then(data => { // do something with data }) .catch(rejected => { console.log(rejected); }); ``` When I remove credentials: "include", it works on localhost, but not on the server. I tried every solution given on StackOverflow and GitHub, but it's just not working out for me.<issue_comment>username_1: Note that there is an unrelated issue in your code but that could bite you later: you should `return res.json()` or you will not catch any error occurring in JSON parsing or your own function processing data. Back to your error: You *cannot* have a `TypeError: failed to fetch` with a successful request. You probably have another request (check your "network" panel to see all of them) that breaks and causes this error to be logged. Also, maybe check "Preserve log" to be sure the panel is not cleared by any indelicate redirection. Sometimes I happen to have a persistent "console" panel, and a cleared "network" panel that leads me to have error in console which is actually unrelated to the visible requests. You should check that. 
Or you (but that would be vicious) actually have a hardcoded `console.log('TypeError: failed to fetch')` in your final `.catch` ;) and the error is in reality in your `.then()` but it's hard to believe. Upvotes: 3 <issue_comment>username_2: This could be an issue with the response you are receiving from the backend. If it was working fine on the server then the problem could be within the response headers. Check the value of `Access-Control-Allow-Origin` in the response headers. Usually the fetch API will throw "failed to fetch" even after receiving a response when the response header's `Access-Control-Allow-Origin` and the origin of the request don't match. Upvotes: 8 [selected_answer]<issue_comment>username_3: I've simply input "http://" before "localhost" in the url. Upvotes: 5 <issue_comment>username_4: If you are invoking fetch on a localhost server, use non-SSL unless you have a valid certificate for localhost. fetch will fail on an invalid or self-signed certificate especially on localhost. Upvotes: 3 <issue_comment>username_5: I understand this question might have a React-specific cause, but it shows up first in search results for "Typeerror: Failed to fetch" and I wanted to lay out all possible causes here. The Fetch spec lists times when you throw a TypeError from the Fetch API: <https://fetch.spec.whatwg.org/#fetch-api> Relevant passages as of January 2021 are below. These are excerpts from the text. 4.6 HTTP-network fetch > > To perform an HTTP-network fetch using request with an optional credentials flag, run these steps: > > ... > > 16. Run these steps in parallel: > > ... > > 2. If aborted, then: > > ... > > 3. Otherwise, if stream is readable, error stream with a TypeError. > > > > > To append a name/value name/value pair to a Headers object (headers), run these steps: > > > 1. Normalize value. > 2. If name is not a name or value is not a value, then throw a TypeError. > 3. If headers’s guard is "immutable", then throw a TypeError.
> > > Filling Headers object headers with a given object object: > > To fill a Headers object headers with a given object object, run these steps: > > > 1. If object is a sequence, then for each header in object: > 1. If header does not contain exactly two items, then throw a TypeError. > > > Method steps sometimes throw TypeError: > > The delete(name) method steps are: > > > 1. If name is not a name, then throw a TypeError. > 2. If this’s guard is "immutable", then throw a TypeError. > > > > > The get(name) method steps are: > > > 1. If name is not a name, then throw a TypeError. > 2. Return the result of getting name from this’s header list. > > > > > The has(name) method steps are: > > > 1. If name is not a name, then throw a TypeError. > > > > > The set(name, value) method steps are: > > > 1. Normalize value. > 2. If name is not a name or value is not a value, then throw a TypeError. > 3. If this’s guard is "immutable", then throw a TypeError. > > > > > To extract a body and a `Content-Type` value from object, with an optional boolean keepalive (default false), run these steps: > > ... > > 5. Switch on object: > > ... > > ReadableStream > > If keepalive is true, then throw a TypeError. > > If object is disturbed or locked, then throw a TypeError. > > > In the section "Body mixin" if you are using FormData there are several ways to throw a TypeError. I haven't listed them here because it would make this answer very long. Relevant passages: <https://fetch.spec.whatwg.org/#body-mixin> In the section "Request Class" the new Request(input, init) constructor is a minefield of potential TypeErrors: > > The new Request(input, init) constructor steps are: > > ... > > 6. If input is a string, then: > > ... > > 2. If parsedURL is a failure, then throw a TypeError. > > 3. IF parsedURL includes credentials, then throw a TypeError. > > ... > > 11. If init["window"] exists and is non-null, then throw a TypeError. > > ... > > 15. If init["referrer" exists, then: > > ... > > 1. 
Let referrer be init["referrer"]. > > 2. If referrer is the empty string, then set request’s referrer to "no-referrer". > > 3. Otherwise: > > 1. Let parsedReferrer be the result of parsing referrer with baseURL. > > 2. If parsedReferrer is failure, then throw a TypeError. > > ... > > 18. If mode is "navigate", then throw a TypeError. > > ... > > 23. If request's cache mode is "only-if-cached" and request's mode is not "same-origin" then throw a TypeError. > > ... > > 27. If init["method"] exists, then: > > ... > > 2. If method is not a method or method is a forbidden method, then throw a TypeError. > > ... > > 32. If this’s request’s mode is "no-cors", then: > > 1. If this’s request’s method is not a CORS-safelisted method, then throw a TypeError. > > ... > > 35. If either init["body"] exists and is non-null or inputBody is non-null, and request’s method is `GET` or `HEAD`, then throw a TypeError. > > ... > > 38. If body is non-null and body's source is null, then: > > 1. If this’s request’s mode is neither "same-origin" nor "cors", then throw a TypeError. > > ... > > 39. If inputBody is body and input is disturbed or locked, then throw a TypeError. > > > > > The clone() method steps are: > > > 1. If this is disturbed or locked, then throw a TypeError. > > > In the Response class: > > The new Response(body, init) constructor steps are: > > ... > > 2. If init["statusText"] does not match the reason-phrase token production, then throw a TypeError. > > ... > > 8. If body is non-null, then: > > 1. If init["status"] is a null body status, then throw a TypeError. > > ... > > > > > The static redirect(url, status) method steps are: > > ... > > 2. If parsedURL is failure, then throw a TypeError. > > > > > The clone() method steps are: > > > 1. If this is disturbed or locked, then throw a TypeError. > > > In section "The Fetch method" > > The fetch(input, init) method steps are: > > ... > > 9. 
Run the following in parallel: > > To process response for response, run these substeps: > > ... > > 3. If response is a network error, then reject p with a TypeError and terminate these substeps. > > > In addition to these potential problems, there are some browser-specific behaviors which can throw a TypeError. For instance, if you set keepalive to true and have a payload > 64 KB you'll get a TypeError on Chrome, but the same request can work in Firefox. These behaviors aren't documented in the spec, but you can find information about them by Googling for limitations for each option you're setting in fetch. Upvotes: 5 <issue_comment>username_6: In my case I got "TypeError" when using online JS tools like jsfiddle or stackblitz and the problem was that my url was http instead of https. Upvotes: 2 <issue_comment>username_7: Behind the scenes the XHR client sends an HTTP OPTIONS request, called pre-flight, to look for certain security permission headers, called CORS Allow headers. If the required headers are not found (as in you might not have the necessary permissions), TypeError is the result since it has not actually tried to send your POST/GET request. You can observe this behavior in the browser console: it looks like the browser makes two requests to the same location, the first having the HTTP Method: OPTIONS. Upvotes: 2 <issue_comment>username_8: I was getting this issue since I had not added `CORS` in my **flask** app. I needed to add it, then it worked, here are the lines: ```py ... from flask_cors import CORS def create_app(test_config=None): app = Flask(__name__) app.config.from_object('config') # Import things from config CORS(app) # CORS Headers @app.after_request def after_request(response): response.headers.add('Access-Control-Allow-Headers', 'Content-Type,Authorization,true') response.headers.add('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE,OPTIONS') return response ...
``` Upvotes: 1 <issue_comment>username_9: This question comes up first when googling the same error, so sharing the solution here for others: If you use Sentry, you need to add the 'sentry-trace' value to your server's Access-Control-Allow-Headers list. Issue here <https://github.com/getsentry/sentry-javascript/issues/3185> Upvotes: 0 <issue_comment>username_10: In my case, this error was caused by the missing `always` parameter to the `add_header` directive of nginx. For example, when our backend sent an error response, such as PHP's `header('HTTP', ERROR_CODE)`, the CORS headers were missing from the response. As the [docs](http://nginx.org/en/docs/http/ngx_http_headers_module.html) state about the `add_header` directive > > Adds the specified field to a response header provided that the response code equals 200, 201 (1.3.10), 204, 206, 301, 302, 303, 304, 307 (1.1.16, 1.0.13), or 308 (1.13.0). > > > and about the `always` parameter > > If the always parameter is specified (1.7.5), the header field will be added regardless of the response code. > > > Adding the `always` parameter to any required headers fixed the issue for me, like so: ``` add_header HEADER_NAME "HEADER_VALUE" always; ``` Upvotes: 1 <issue_comment>username_11: after struggling a few hours on that error, I would like to share my solution ``` Calculate ``` There was a problem with **some\_props\_with\_default\_value** that was generating a page refresh. The page refresh was canceling the fetch request before getting a response and raising this error. Went back to normal when fixing the page refresh : ``` Calculate ``` Hope it helps someone.
Upvotes: 0 <issue_comment>username_12: In my case I was trying to make fetch requests to my Django API via localhost and I was able to make it work by changing my URL from: ``` var url = "http://127.0.0.1:8000/"; ``` to: ``` var url = "http://localhost:8000/"; ``` Upvotes: 0 <issue_comment>username_13: I spent a few hours on this error in my project involving Vue / Nuxt 3.0.0 and Supabase Edge Functions. I finally realized that I wasn't including the corsHeader on the success response, only the error response. So everything worked, and it returned Status Code 201, but it popped as an error. Simple solution to a frustrating problem. Left this here because my search brought me back here multiple times. Hope it helps! See how they add ...corsHeaders to both the successful and error responses here: <https://supabase.com/docs/guides/functions/cors> Upvotes: 2 <issue_comment>username_14: I was getting this issue when uploading a file: the file was too large, and choosing a smaller file worked; see [this post](https://stackoverflow.com/questions/53736912/neterr-connection-reset-when-large-file-takes-longer-than-a-minute). Upvotes: 0
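The spec steps quoted above boil down to a practical rule: `fetch` rejects with a `TypeError` only for network-level failures (CORS rejections, DNS errors, blocked requests), while HTTP error statuses resolve normally and must be checked via `response.ok`. A small wrapper sketches the two cases; `fetchJson` and the injectable `fetchImpl` are illustrative names, not from any answer:

```javascript
// Sketch only: the names fetchJson and fetchImpl are illustrative, not from the thread.
// fetch() rejects with TypeError solely on network-level failures (CORS, DNS, blocked
// requests); HTTP errors like 404/500 resolve normally, so check response.ok yourself.
async function fetchJson(url, fetchImpl = globalThis.fetch) {
  let response;
  try {
    response = await fetchImpl(url);
  } catch (err) {
    // This branch is the "TypeError: Failed to fetch" case discussed above.
    throw new Error(`network-level failure for ${url}: ${err.message}`);
  }
  if (!response.ok) {
    // The server was reached but answered with an error status.
    throw new Error(`HTTP ${response.status} for ${url}`);
  }
  return response.json();
}
```

Injecting `fetchImpl` also makes the two failure modes easy to exercise without a network.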
2018/03/17
2,396
5,936
<issue_start>username_0: What is the difference between `git push --all` and `git push --mirror`? I only know this: * With deleted local branch, `--all` doesn't push it and `--mirror` does. This is correct? Any other differences?<issue_comment>username_1: As it says in [the documentation](https://www.kernel.org/pub/software/scm/git/docs/git-push.html): > > --all > > > Push all branches (i.e. refs under `refs/heads/`); cannot be used > with other . > > --mirror > > > ... specifies that all refs under `refs/` (which includes but is not limited to `refs/heads/`, `refs/remotes/`, and `refs/tags/`) be mirrored ... > > So *a*, if not *the*, key difference is that one means `refs/heads/*` and one means `refs/*`. The `refs/heads/*` names are the branch names. Anything in `refs/remotes/` is a remote-tracking name, and anything in `refs/tags/` is a tag name. Other notable name-spaces include `refs/notes/`, `refs/replace/`, and the singular `refs/stash`. The `--mirror` option goes on to mention: > > locally updated refs will be force updated on the remote end, > and deleted refs will be removed from the remote end. > > > Hence `--mirror` effectively implies both `--force` and `--prune`; `--all` does not. You can, however, add `--force` and/or `--prune` to `git push --all`, if you like. It is always up to the *other* Git to decide whether to obey polite requests (those sent without `--force`) or commands (`--force`) to make changes to its references. > > With deleted local branch, `--all` doesn't push it and `--mirror` does. > > > This is a consequence of the `--prune` option: telling your Git to use `--prune` means "ask them to delete names in their name-space(s) that are not in mine". Upvotes: 6 [selected_answer]<issue_comment>username_2: With Git 2.24 (Q4 2019), you won't be able to use `git push --all` with `--mirror`. And the problem is: `--all` is sometime *implied*, when you are *pushing* from a local repository you just cloned with `--mirror`. 
[<NAME>](https://twitter.com/FiloSottile) made the unfortunate experience [recently](https://twitter.com/filosottile/status/1163918701462249472): > > Ok, git, WTF. This is not in the man pages. > > > [![https://pbs.twimg.com/media/ECcR4vaXsAQ4bfm?format=jpg&name=4096x4096](https://i.stack.imgur.com/muGSY.jpg)](https://i.stack.imgur.com/muGSY.jpg) So, fix an earlier regression to "`git push --all`" which should have been forbidden when the target remote repository is set to be a mirror. See [commit 8e4c8af](https://github.com/git/git/commit/8e4c8af058d7eb1b887184deb3bf79f3818b3a65) (02 Sep 2019) by [<NAME> (`tgummerer`)](https://github.com/tgummerer). (Merged by [<NAME> -- `gitster` --](https://github.com/gitster) in [commit fe048e4](https://github.com/git/git/commit/fe048e4fd905e5c45b07381339ef627f57cf0822), 30 Sep 2019) > > `push`: disallow `--all` and refspecs when `remote..mirror` is set > ------------------------------------------------------------------ > > > > > Pushes with `--all`, or refspecs are disallowed when `--mirror` is given to 'git push', or when '`remote..mirror`' is set in the config of the repository, because they can have surprising effects. > > [800a4ab](https://github.com/git/git/commit/800a4ab399e954b8970897076b327bf1cf18c0ac) ("`push`: check for errors earlier", 2018-05-16, Git v2.18.0-rc0) refactored this code to do that check earlier, so we can explicitly check for the presence of flags, instead of their side-effects. > > > However when '`remote..mirror`' is set in the config, the `TRANSPORT_PUSH_MIRROR` flag would only be set after we calling '`do_push()`', so the checks would miss it entirely. > > > This leads to surprises for users (see above). > > > Fix this by making sure we set the flag (if appropriate) before checking for compatibility of the various options. > > > That leads to, with Git 2.29 (Q4 2020), a code cleanup. 
See [commit 842385b](https://github.com/git/git/commit/842385b8a4fa56678f13cda599ee96463004e7bf), [commit 9dad073](https://github.com/git/git/commit/9dad073d4b9b6679c3e86f28aacf79f85ed786d2), [commit 26e28fe](https://github.com/git/git/commit/26e28fe7bbdf8b22ed096dfd76a9311e86ffb200), [commit 75d3bee](https://github.com/git/git/commit/75d3bee15778a3aaeb8234023fc224838ac53181), [commit 20f4b04](https://github.com/git/git/commit/20f4b044a681fffd469cc9ddcf055580a20fd612), [commit 5b9427e](https://github.com/git/git/commit/5b9427e0ac4d4b6c96f23fc5eb9b047a27563c65), [commit 8d2aa8d](https://github.com/git/git/commit/8d2aa8dfac4048c964453a8983f1dc12ecdfe1c3), [commit 424e28f](https://github.com/git/git/commit/424e28fcadfe0a40e444687c10fb4eaff8360f8d), [commit e885a84](https://github.com/git/git/commit/e885a84f1bc660adfc1dea5f6c25d0a92c7c9dbc), [commit 185e865](https://github.com/git/git/commit/185e86522678fed077d6cec02381bcf899bf24e5) (30 Sep 2020) by [<NAME> (`peff`)](https://github.com/peff). (Merged by [<NAME> -- `gitster` --](https://github.com/gitster) in [commit 19dd352](https://github.com/git/git/commit/19dd352d03adc75d0b6530975a44b7bb23c69063), 05 Oct 2020) > > [`push`](https://github.com/git/git/commit/5b9427e0ac4d4b6c96f23fc5eb9b047a27563c65): drop unused repo argument to `do_push()` > ------------------------------------------------------------------------------------------------------------------------------ > > > Signed-off-by: <NAME> > > > > > We stopped using the "repo" argument in [8e4c8af058](https://github.com/git/git/commit/8e4c8af058d7eb1b887184deb3bf79f3818b3a65) ("`push`: disallow --all and refspecs when remote..mirror is set", 2019-09-02, Git v2.24.0-rc0 -- [merge](https://github.com/git/git/commit/fe048e4fd905e5c45b07381339ef627f57cf0822) listed in [batch #4](https://github.com/git/git/commit/bc12974a897308fd3254cf0cc90319078fe45eea)), which moved the `pushremote` handling to its caller. > > > Upvotes: 2
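The `refs/heads/*` versus `refs/*` difference described above is easy to observe on throwaway repositories. The script below is only a sketch (the repository paths and branch names are illustrative, and it assumes a reasonably recent `git` is installed):

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git init -q "$tmp/work"
cd "$tmp/work"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init
git branch feature                    # another ref under refs/heads/
git tag v1                            # a ref under refs/tags/
git remote add origin "$tmp/origin.git"

git push -q --all origin              # sends refs/heads/* only: branches arrive, the tag does not
git ls-remote --tags origin           # prints nothing: --all ignores refs/tags/*

git push -q --mirror origin           # sends refs/*: tags come along, and deleted refs would be pruned
git ls-remote --tags origin           # refs/tags/v1 is there now
```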
2018/03/17
741
2,084
<issue_start>username_0: I have this object: ``` obj = {key : val, key1 : val1, key2 : {key21 : val21,key22 : val22}, key3 : val3} ``` I want to generate a new object to be like: ``` objnew = {key : val, key1 : val1, key21 : val21, key22 : val22, key3 : val3} ```<issue_comment>username_1: For 2-level object structure (with `Object.keys()` and `Object.assign()` functions): ```js var obj = { 'key' : 'val', 'key1' : 'val1', 'key2' : {'key21' : 'val21', 'key22' : 'val22'}, 'key3' : 'val3' }; Object.keys(obj).forEach(function(k){ if (typeof obj[k] === 'object') { Object.assign(obj, obj[k]); delete obj[k]; } }); console.log(obj); ``` Upvotes: 0 <issue_comment>username_2: Here you go: ``` var obj = { key : 1, key1 : 2, key2 : { key21 : 3,key22 : 4 }, key3 : 5 }; Object.keys(obj).forEach(function(key) { if(typeof obj[key] === 'object') { Object.keys(obj[key]).forEach(function(innerKey) { obj[innerKey] = obj[key][innerKey]; }); delete obj[key]; } }); console.log(obj); ``` Upvotes: 0 <issue_comment>username_3: As per your comments, if you *really* want to keep the order for whatever reason, and don't want to do it for anything but `key2`, here's a possible solution. Please read [this question](https://stackoverflow.com/questions/30076219/does-es6-introduce-a-well-defined-order-of-enumeration-for-object-properties/30919039) for information about order of object keys. In short, it's most likely a bad idea to rely on it in most cases. You'd be better off using a [`Map`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map) instance or just an array. ```js let obj = { key: 0, key1: 1, key2: {key21: 21, key22: 22}, key3: 3 }; let objArray = Object.keys(obj).map(key => ({key, value: obj[key]})); let result = objArray.reduce((result, entry) => Object.assign(result, entry.key === 'key2' ? entry.value : {[entry.key]: entry.value}) , {}); console.log(result); ``` Upvotes: 1
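The same one-level flattening can also be written with `Object.entries`/`Object.fromEntries`; this variant is a sketch of mine, not from the answers above, and like them it flattens exactly one level and drops the original nested key:

```javascript
// One-level flatten, sketched with Object.entries (not from the original answers).
// Each object-valued key is replaced by that object's own entries, in place,
// preserving insertion order.
const flattenOneLevel = (obj) =>
  Object.fromEntries(
    Object.entries(obj).flatMap(([key, value]) =>
      value !== null && typeof value === 'object'
        ? Object.entries(value)   // splice the child's entries in place of the parent key
        : [[key, value]]
    )
  );
```

The `value !== null` guard matters because `typeof null === 'object'` in JavaScript.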
2018/03/17
744
2,141
<issue_start>username_0: I am using create-react-app. When you go to localhost:3001/test - it *does* serve up the HTML. But all you see is an empty page because nothing in the id "root" is rendered. This is the code I have in my server: ``` app.get('/test', (req, res) => { res.sendFile(__dirname + '/public/index.html'); }) ``` and my HTML is close to this: ``` ```
2018/03/17
937
2,875
<issue_start>username_0: I've been trying to write an automated uploader that will sort through a directory and upload the next file in the directory with each run. I read that I should first create a text file that will write the names of the files. ``` static String fileName; static int i; public static void incrementalChoice() throws IOException { FileWriter fileWriter = new FileWriter("fileNames.txt"); PrintWriter printWriter = new PrintWriter(fileWriter); File[] fileArray = new File(pathToFiles).listFiles(); try { while(i < fileArray.length) { fileName = fileArray.toString(); printWriter.println(fileName); i++; } printWriter.close(); } catch (Exception e) { e.printStackTrace(); } } ``` But every time I run it, the result of the text is the following list of files named like that. ``` [Ljava.io.File;@659a969b [Ljava.io.File;@659a969b [Ljava.io.File;@659a969b [Ljava.io.File;@659a969b [Ljava.io.File;@659a969b ``` Anyone knows how I can instead have the actual name of the file rather than that?
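The `[Ljava.io.File;@...` output comes from calling `toString()` on the array object itself rather than on each element. A minimal sketch of the likely fix (the class and method names here are illustrative): iterate the array and use `File.getName()` on each entry:

```java
// Sketch: class/method names are illustrative. fileArray.toString() prints the array
// object itself ("[Ljava.io.File;@..."); getName() on each element gives the file name.
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class FileNameLister {
    public static List<String> fileNames(File[] fileArray) {
        List<String> names = new ArrayList<>();
        for (File f : fileArray) {
            names.add(f.getName()); // e.g. "photo1.png", not the array's identity string
        }
        return names;
    }
}
```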
2018/03/17
803
2,665
<issue_start>username_0: I have confluent kafka, zookeeper, schema-registry and ksql running in containers on Kubernetes cluster. Kafka, zookeeper and schema registry works fine, a can create topic and write data in Avro format, but when I'm trying to check ksql and create some streaming with curl like: ``` curl -XPOST http://ksql-svc.someapp:8080/ksql -H "Content-Type: application/json" -d $' {"ksql": "CREATE STREAM kawabanga_stream (log_id varchar, created_date varchar) WITH (kafka_topic = '\'kawabanga\'', value_format = '\'avro\'');","streamsProperties":{}}' ``` I get error: ``` [{"error":{"statementText":"CREATE STREAM kawabanga_stream (log_id varchar, created_date varchar) WITH (kafka_topic = 'kawabanga', value_format = 'avro');","errorMessage":{"message":"Avro schema file path should be set for avro topics.","stackTrace":["io.confluent.ksql.ddl.commands.RegisterTopicCommand.extractTopicSerDe(RegisterTopicCommand.java:75)","io.confluent.ksql.ddl.commands.RegisterTopicCommand. ``` Please find belowe my ksql server config: ``` # cat /etc/ksql/ksqlserver.properties bootstrap.servers=kafka-0.kafka-hs:9093,kafka-1.kafka-hs:9093,kafka- 2.kafka-hs:9093 schema.registry.host=schema-svc schema.registry.port=8081 ksql.command.topic.suffix=commands listeners=http://0.0.0.0:8080 ``` Also I tried to start server without schema.registry strings but with no luck<issue_comment>username_1: According to the [KSQLConfig class](https://github.com/confluentinc/ksql/blob/master/ksql-common/src/main/java/io/confluent/ksql/util/KsqlConfig.java#L42), you should use the `ksql.schema.registry.url` property to specify the location of the Schema Registry. This looks to of been the case since at least v0.5 onwards. It's also worth noting that using the RESTful API directly isn't currently supported. So you may find the API changes between releases. 
Upvotes: 1 <issue_comment>username_2: You must set the configuration `ksql.schema.registry.url` (see [KSQL v0.5 documentation](https://github.com/confluentinc/ksql/blob/v0.5/docs/syntax-reference.md#create-stream)). FYI: We will have better documentation for Avro usage in KSQL and for Confluent Schema registry integration with the upcoming GA release of KSQL in early April (as part of Confluent Platform 4.1). > > I don't know how to check version, but I'm using docker image confluentinc/ksql-cli (I think it's 0.5). For Kafka I used Confluent Kafka Docker image 4.0.0 > > > * The KSQL server shows the version on startup. * If you can't see the KSQL server startup message, you can also check the KSQL version by entering "VERSION" in the KSQL CLI prompt (`ksql> VERSION`). Upvotes: 2
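Applying the accepted fix to the server config shown in the question, the properties file would become something like this (the URL value is inferred from the question's `schema-svc`/`8081` settings, so treat it as a sketch):

```
bootstrap.servers=kafka-0.kafka-hs:9093,kafka-1.kafka-hs:9093,kafka-2.kafka-hs:9093
ksql.command.topic.suffix=commands
listeners=http://0.0.0.0:8080
# replace schema.registry.host / schema.registry.port with the single URL property:
ksql.schema.registry.url=http://schema-svc:8081
```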
2018/03/17
726
2,462
<issue_start>username_0: How would I create a lambda expression for a task that returns a string? This is what I have tried but I get an error. Thank you for any help. ``` public static async Task<string> GetStringAsync(string path) { try { var task = new Task<string>(async () => { var response = await Client.GetAsync(path); var responsestring = await response.Content.ReadAsStringAsync(); return responsestring; }); return await Task.WhenAny(task, Task.Delay(20000)) == task ? task.Result : RequestTimeOutMessage; } catch (Exception e) { return e.GetBaseException().Message; } } } ```<issue_comment>username_1: [You should never use the `Task` constructor](https://blog.stephencleary.com/2014/05/a-tour-of-task-part-1-constructors.html). There are literally no good reasons to use it. Your problem can be naturally expressed as a separate method: ``` public static async Task<string> GetStringAsync(string path) { try { var task = DoGetStringAsync(path); return await Task.WhenAny(task, Task.Delay(20000)) == task ? await task : RequestTimeOutMessage; } catch (Exception e) { return e.GetBaseException().Message; } } private async Task<string> DoGetStringAsync(string path) { var response = await Client.GetAsync(path); var responsestring = await response.Content.ReadAsStringAsync(); return responsestring; } ``` If you *really* want an `async` lambda expression (personally, I think it obfuscates the code unnecessarily), you can move the separate method inline and assign it to an [asynchronous delegate type](https://blog.stephencleary.com/2014/02/synchronous-and-asynchronous-delegate.html): ``` public static async Task<string> GetStringAsync(string path) { try { Func<Task<string>> func = async () => { var response = await Client.GetAsync(path); var responsestring = await response.Content.ReadAsStringAsync(); return responsestring; }; var task = func(); return await Task.WhenAny(task, Task.Delay(20000)) == task ? await task : RequestTimeOutMessage; } catch (Exception e) { return e.GetBaseException().Message; } } ``` On a side note, I recommend using exceptions for timeouts as well as communication errors, rather than special strings. Upvotes: 5 [selected_answer]<issue_comment>username_2: This is the "simplest" way I'm aware of: ``` //Without result var task = ((Func<Task>)(async () =>{ await Task.Delay(100); }))(); //With result var task2 = ((Func<Task<string>>)(async () =>{ await Task.Delay(100); return "your-string"; }))(); ``` Upvotes: 2
2018/03/17
2,306
7,497
<issue_start>username_0: I've tried a few variations of CSS star ratings via different methods, and am trying to implement the following via FontAwesome rather than using a sprite. I want to be able to include half stars ideally, but this is where the example below is failing. This is what I've tried so far. I can't get the half / partial star to work correctly here. Any pointers greatly appreciated! ```css .score { display: block; font-size: 16px; position: relative; overflow: hidden; } .score-wrap { display: inline-block; position: relative; overflow: hidden; height: 19px; } .score .stars-active { color: #EEBD01; position: relative; z-index: 10; display: inline-block; } .score .stars-inactive { color: grey; position: absolute; top: 0; left: 0; -webkit-text-stroke: initial; overflow: hidden; } ``` ```html ```<issue_comment>username_1: The overflow:hidden needs to be on 'stars-active' (the sized element) instead of 'score-wrap' (which never overflows.) You can use `white-space: nowrap` to prevent the stars from wrapping to the next line within the hidden-overflow container. ```css .score { display: block; font-size: 16px; position: relative; overflow: hidden; } .score-wrap { display: inline-block; position: relative; height: 19px; } .score .stars-active { color: #EEBD01; position: relative; z-index: 10; display: inline-block; overflow: hidden; white-space: nowrap; } .score .stars-inactive { color: grey; position: absolute; top: 0; left: 0; -webkit-text-stroke: initial; /* overflow: hidden; */ } ``` ```html ``` Upvotes: 5 [selected_answer]<issue_comment>username_2: I found this solution by Paales here: <https://github.com/FortAwesome/Font-Awesome/issues/717> I think it's an elegant solution. It looks comparable to your code because the full stars overlap the empty stars and by using `overflow: hidden` and `position: absolute`. Now you can set the width of the full stars and show partly filled stars. 
If you want to show half stars you could change the width of the absolute positioned element with 10% increments. ```css .rating-box { position:relative; vertical-align: middle; font-size: 3em; font-family: FontAwesome; display:inline-block; color: #F68127; } .rating-box:before{ content: "\f006 \f006 \f006 \f006 \f006"; } .rating-box .rating { position: absolute; left:0; top:0; white-space:nowrap; overflow:hidden; color: #F68127; } .rating-box .rating:before { content: "\f005 \f005 \f005 \f005 \f005"; } ``` ```html ``` PS: username_1 already gave you the answer about the mistake you made regarding `white-space: no-wrap`, so I suggest accepting that answer. I just wanted to share this solution because I think it is a very nice alternate approach. Upvotes: 4 <issue_comment>username_3: I find myself doing with two sets of stars styles solid and regular. Solid set declared first to have higher z-index and overlapped Regular sets of stars controlled by hidden, nowrap and width. ```css .stars-rating { position: relative; display: inline-block; } .stars-rating .stars-score { position: absolute; top: 0; left: 0; overflow: hidden; width: 20%; white-space: nowrap; } ``` ```html ``` Upvotes: 1 <issue_comment>username_4: For future reference, FontAwesome have added half stars to their arsenal. Make sure to include in the section of your page, and then the following will give you the basic structure. ``` ``` If you want the stars to appear bigger, edit `fa fa-star` to be `fa fa-star 3x` where 3x indicates the size, so 3x, 4x, 5x etc. If you would like a different style of star, use `fa-star-o`. The default colour is black, so if you would like to change the colour you can either edit it via CSS or add the colour to the class, i.e `class="fa-star-o bg-light"` Upvotes: 1 <issue_comment>username_5: This is very nice, thanks for sharing. 
I made the same tweak as Faraz, switching to ``` ``` in the stars-active span, and ``` ``` in stars-inactive, as it bothered me that stars above the rating % didn't show. Upvotes: 1 <issue_comment>username_6: below is the Font Awesome 5 version of [@Rob's answer above](https://stackoverflow.com/a/49343426/6908282): We can use `font-family: "Font Awesome 5 Free"; font-weight: xxx;` to switch between solid and outline You can play around with the code at the [following link](https://stackblitz.com/edit/css-rating?file=style.css). ```css /* rating box with fontawesome 5 inspired from - https://stackoverflow.com/a/49343426/6908282*/ .rating-box { position: relative; vertical-align: middle; font-size: 3em; /* comment/edit this to change size */ font-family: FontAwesome; display: inline-block; color: #F68127; } .rating-box:before { font-family: "Font Awesome 5 Free"; font-weight: 400; content: "\f005 \f005 \f005 \f005 \f005"; } .rating-box .rating { position: absolute; left: 0; top: 0; white-space: nowrap; overflow: hidden; color: #F68127; } .rating-box .rating:before { font-family: "Font Awesome 5 Free"; font-weight: 900; content: "\f005 \f005 \f005 \f005 \f005"; } ``` ```html ``` Upvotes: 2 <issue_comment>username_7: ★★★⯪☆ Just use Unicode characters and then use <https://www.babelstone.co.uk/Fonts/Shapes.html> font as it has ⯪ (STAR WITH LEFT HALF BLACK) ``` ★★★⯪☆ ``` Given the characters (glyphs) above, lets say that ⯪ is rendered as ▯ or some other character that denotes that the glyph is missing from the font. The way you fix this is that you copy ⯪ (missing glyph) and then in google type ``` Unicode ⯪ ``` (you paste the glyph you are looking for). From the results you find <https://unicode-table.com/en/2BEA/> that says that the character is called "Star with Left Half Black" and that the code is U+2BEA. 
Next you take the 2BEA portion of U+2BEA and paste it into the URL below between /char/ and /fontsupport.htm <https://www.fileformat.info/info/unicode/char/2BEA/fontsupport.htm> The URL for your missing character lists all the fonts that contain that character. Now you can click "view" next to each font listed and find the one you like best. If you find multiple that you like equally well, you can download all of them and then pick the smallest one. You want to have a .ttf, .woff, and .woff2 version of the font you choose and then you can use it in CSS as follows: ```css @font-face { font-family: 'BabelStoneShapes'; src: url('/fonts/BabelStoneShapes.woff2')format('woff2'), /* Smallest file, if your browser can use this, it will stop here and not download the rest */ url('/fonts/BabelStoneShapes.woff')format('woff'), /* A larger file, if your browser does not understand woff2 */ url('/fonts/BabelStoneShapes.ttf')format('truetype');/* The largest of the three */ font-weight: normal; font-style: normal; font-display: block; } div.rating { font-family: BabelStoneShapes; } ``` and then use it in HTML as follows: ```html ★★★⯪☆ ``` Obviously, if Stack Overflow wanted to, they could make their website support virtually all Unicode fonts by using Symbola and updating it each time a new version came out. But they probably never will so in this "Answer" you won't see the half star, but if you follow the example, you will. Upvotes: 0
2018/03/17
615
1,992
<issue_start>username_0: I have an Excel file which includes lots of rows of information. I have actually a single problem which is I can't get the parent of each cells according to the information in the cell. It looks like this [![enter image description here](https://i.stack.imgur.com/bRYZR.jpg)](https://i.stack.imgur.com/bRYZR.jpg) In the image, you can see that A has no parent and its' children are A01 and AB and more and more like AC and AD. Is there any way for handling this issue with excel-formulas?<issue_comment>username_1: Assuming that your sample data is true to the format of all your data (there is either 2 numbers at the end of each parent or only an extra letter) then the following formula will work: Given formula is set to look at data in cell A1, you will have to drag and auto fill the formula down for all rows. ``` =IF(OR(RIGHT(A1,1)="0",RIGHT(A1,1)="1",RIGHT(A1,1)="2",RIGHT(A1,1)="3",RIGHT(A1,1)="4",RIGHT(A1,1)="5",RIGHT(A1,1)="6",RIGHT(A1,1)="7",RIGHT(A1,1)="8",RIGHT(A1,1)="9")=TRUE,LEFT(A1,LEN(A1)-2),LEFT(A1,LEN(A1)-1)) ``` It works by checking if the last character is a number (with this data excel treats it as text so we have to check for each number as if it is text), if it matches a number then show the parent minus the two right characters otherwise show the parent minus one character. Upvotes: 1 <issue_comment>username_2: Okey I think I found the answer. Here is my formula ``` =IF(LEN(B2)=1;"NULL";IF(LEN(B2)=2;LEN(B2;0);IF(LEN(B2)=3;LEFT(B2;1);IF(LEN(B2)=4;LEFT(B2;3);IF(LEN(B2)=5;LEFT(B2;4);IF(LEN(B2)=7;IF(B2;5))))))) ``` With this formula, I check the length of the characters in cell and get the first part of that string instead of deleting the last indexes because of there are also some string values. Because of there are some rules with my product codes, I figured out how they are changing and I got the part of the code by their sizes. Thanks for replies, they helped to find this solution. Upvotes: 1 [selected_answer]
2018/03/17
912
3,109
<issue_start>username_0: I cant get my second if statement to work like expected. It is adding data into the array even though it's validated as incorrect. For example: The console prompts; > > Enter Score: 1 for student: 1 > > > Then if 500 is entered, the following prompt appears: > > Please enter a value between 0 and 100. > > > Enter Score: 2 for student 1. > > > It is not holding it at "Score 1" for correct data to be entered. I don't understand why because the 1st if statement works like that, keeping the array at [0,0] until correct data is entered. ``` static bool IsInRange (int input) { return input >= 0 && input <= 100; } for (int studentIndex = 0; studentIndex < studentCount; studentIndex++) { for (int scoreIndex = 0; scoreIndex < scoreCount; scoreIndex++) { int parsedScore = -1; string score = string.Empty; while(!IsNumeric(score) && !IsInRange(parsedScore)) { Console.WriteLine("Enter score: {0} for student: {1}", scoreIndex + 1, studentIndex + 1); score = Console.ReadLine(); if (!IsNumeric(score)) { Console.WriteLine(string.Empty); Console.WriteLine("Please enter a numeric value."); continue; } parsedScore = Convert.ToInt32(score); if (!IsInRange(parsedScore)) { Console.WriteLine(string.Empty); Console.WriteLine("Please enter a value between 0 and 100"); } studentScores[studentIndex, scoreIndex] = parsedScore; } } } ```
662
2,159
<issue_start>username_0: I am using Picasso 2.5.2 to use it with OkHttp 2.5.0 The code in `onCreate()` is like below:

```
File mypath = new File(getFilesDir().getAbsolutePath()+"/"+ date + "image.png");
if (mypath.exists()) {
    Picasso.with(FullscreenImage.this).load(mypath).into(fullImage);
    Toasty.info(FullscreenImage.this, mypath.toString()).show();
} else {
    // Download the image from Firebase and create the image in the
    // same directory and name
}
```

I see the image in the `files` folder in my app folder but it does not display in the image view. The image size: 1.4 MB
2018/03/17
2,110
7,919
<issue_start>username_0: I am writing an Android app that displays pizza places near your location. To do that I call the Google Places API with this URL: `https://maps.googleapis.com/maps/api/place/nearbysearch/json?location={latitude,longitude}&type=pizza&sensor=true&key={MY_KEY}` I process the data by extending AsyncTask and using the methods doInBackground and onPostExecute. From the data I get and parse, I obtain the place id; now I want to call Google again to get the place details (phone, rating, is open ...), since you do not get this data from the nearby-places response. I saw you can make a URL call to Google using: `https://maps.googleapis.com/maps/api/place/details/json?key={MY_KEY}&placeid={PLACE_ID}` but I do not want to call an AsyncTask inside an AsyncTask. Basically I want to call the first URL and, while parsing each place, get the extended information. How can I do that? My code is: public class MapGetNearbyPlacesData extends AsyncTask { ``` public final static int MAP_INDEX = 0; public final static int URL_INDEX = 1; private String googlePlacesData; private GoogleMap map; private String url; @Override protected String doInBackground(Object...
objects) { this.map = (GoogleMap)objects[MAP_INDEX]; this.url = (String)objects[URL_INDEX]; try { this.googlePlacesData = MapDownloadURL.readUrl(this.url); } catch (IOException e) { e.printStackTrace(); } return this.googlePlacesData; } @Override protected void onPostExecute(String s) { List> nearbyPlaceList; nearbyPlaceList = MapDataParser.parseNearbyPlaces(s); showNearbyPlaces(nearbyPlaceList); } private void showNearbyPlaces(List> nearbyPlaceList) { for(int i = 0; i < nearbyPlaceList.size(); i++) { MarkerOptions markerOptions = new MarkerOptions(); HashMap googlePlace = nearbyPlaceList.get(i); String placeName = googlePlace.get(StringUtils.MAP\_PALACE\_NAME); String vicinity = googlePlace.get(StringUtils.MAP\_VICINITY); double lat = Double.parseDouble(googlePlace.get(StringUtils.MAP\_LATITUDE)); double lng = Double.parseDouble(googlePlace.get(StringUtils.MAP\_LONGITUDE)); String placeId = googlePlace.get(StringUtils.PLACE\_ID); //how can I get the extended place info here? LatLng latLng = new LatLng(lat, lng); markerOptions.position(latLng); //build all necessary information to display in info window. String title = new StringBuilder() .append(placeName).append(StringUtils.infoWindowSplitter) .append(vicinity).append(StringUtils.infoWindowSplitter) .append(placeId).toString(); markerOptions.title(title); markerOptions.icon(BitmapDescriptorFactory.defaultMarker(BitmapDescriptorFactory.HUE\_BLUE)); this.map.addMarker(markerOptions); } } ``` }<issue_comment>username_1: I can suggest using the Java client library for Google Maps API Web Services to execute the Places API requests in your AsyncTask. <https://github.com/googlemaps/google-maps-services-java> With this library you can execute nearby search, get places and loop through items and execute place details to get the most complete place data. 
You can add Java client library in your project via Gradle `dependencies { compile 'com.google.maps:google-maps-services:(insert latest version)' compile 'org.slf4j:slf4j-nop:1.7.25' }` To demonstrate how it works I created a simple example. Have a look at implementation of MapGetNearbyPlacesData class. ``` public class MapsActivity extends FragmentActivity implements OnMapReadyCallback { private GoogleMap mMap; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_maps); // Obtain the SupportMapFragment and get notified when the map is ready to be used. SupportMapFragment mapFragment = (SupportMapFragment) getSupportFragmentManager() .findFragmentById(R.id.map); mapFragment.getMapAsync(this); } @Override public void onMapReady(GoogleMap googleMap) { mMap = googleMap; mMap.getUiSettings().setZoomControlsEnabled(true); mMap.setInfoWindowAdapter(new GoogleMap.InfoWindowAdapter() { @Override public View getInfoWindow(Marker arg0) { return null; } @Override public View getInfoContents(Marker marker) { Context context = getApplicationContext(); LinearLayout info = new LinearLayout(context); info.setOrientation(LinearLayout.VERTICAL); TextView title = new TextView(context); title.setTextColor(Color.BLACK); title.setGravity(Gravity.CENTER); title.setTypeface(null, Typeface.BOLD); title.setText(marker.getTitle()); TextView snippet = new TextView(context); snippet.setTextColor(Color.GRAY); snippet.setText(marker.getSnippet()); info.addView(title); info.addView(snippet); return info; } }); LatLng center = new LatLng(41.385064,2.173403); mMap.moveCamera(CameraUpdateFactory.newLatLngZoom(center, 13.0f)); new MapGetNearbyPlacesData().execute(mMap); } private static class MapGetNearbyPlacesData extends AsyncTask> { private GoogleMap map; private String TAG = "so49343164"; @Override protected List doInBackground(GoogleMap... 
maps) { this.map = maps[0]; List options = new ArrayList<>(); GeoApiContext context = new GeoApiContext.Builder() .apiKey("AIza......") .build(); NearbySearchRequest req = PlacesApi.nearbySearchQuery(context, new com.google.maps.model.LatLng(41.385064,2.173403)); try { PlacesSearchResponse resp = req.keyword("pizza").type(PlaceType.RESTAURANT).radius(2000).await(); if (resp.results != null && resp.results.length > 0) { for (PlacesSearchResult r : resp.results) { PlaceDetails details = PlacesApi.placeDetails(context,r.placeId).await(); String name = details.name; String address = details.formattedAddress; URL icon = details.icon; double lat = details.geometry.location.lat; double lng = details.geometry.location.lng; String vicinity = details.vicinity; String placeId = details.placeId; String phoneNum = details.internationalPhoneNumber; String[] openHours = details.openingHours!=null ? details.openingHours.weekdayText : new String[0]; String hoursText = ""; for(String sv : openHours) { hoursText += sv + "\n"; } float rating = details.rating; String content = address + "\n" + "Place ID: " + placeId + "\n" + "Rating: " + rating + "\n" + "Phone: " + phoneNum + "\n" + "Open Hours: \n" + hoursText; options.add(new MarkerOptions().position(new LatLng(lat, lng)) .title(name) .icon(BitmapDescriptorFactory.fromBitmap(BitmapFactory.decodeStream(icon.openConnection().getInputStream()))) .snippet(content) ); } } } catch(Exception e) { Log.e(TAG, "Error getting places", e); } return options; } @Override protected void onPostExecute(List options) { for(MarkerOptions opts : options) { this.map.addMarker(opts); } } @Override protected void onPreExecute() {} @Override protected void onProgressUpdate(Void... 
values) {} } } ``` The result is [![enter image description here](https://i.stack.imgur.com/tXQd8.png)](https://i.stack.imgur.com/tXQd8.png) You can also find the complete sample project on GitHub: <https://github.com/username_1-so/so49343164> Do not forget to replace the API key with your own. I hope this helps! Upvotes: 0 <issue_comment>username_2: Please refer to the Google Places API: ``` https://maps.googleapis.com/maps/api/place/nearbysearch/json?location="+lat+","+lng+"&radius=10000&type=PlaceType&key=************************* ``` To get a key, read the documentation: <https://cloud.google.com/maps-platform/places/> Upvotes: 2
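Outside of Android, the chaining pattern the question asks about (one list request, then a details request per result, all finished before touching the UI) can be sketched in a few lines. The JavaScript below is only an illustration: the fetcher callbacks `fetchNearby` and `fetchDetails` are made-up stand-ins for the two Google URLs, not a real API.

```javascript
// Hypothetical sketch: perform the nearby search, then one details lookup
// per place, and only return once everything is collected. This mirrors
// doing all network work in doInBackground and rendering in onPostExecute.
async function loadPlacesWithDetails(fetchNearby, fetchDetails) {
  const nearby = await fetchNearby();                   // first request
  const detailed = [];
  for (const place of nearby) {
    detailed.push(await fetchDetails(place.placeId));   // follow-up per result
  }
  return detailed;                                      // ready for rendering
}
```

The point is that the second round of requests happens inside the same background unit of work as the first, so no nested task is needed.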
2018/03/18
786
2,787
<issue_start>username_0: I'm trying to have a text input box that when you press keys on the keyboard it shows the ASCII codes underneath in a list. When you first click on the box the paragraph above is hidden. When you click away from the box I want the paragraph to reappear only if there is no list of codes underneath (nothing has been pressed or everything has been deleted). I'm trying to check the list with "if (element.childNodes.length == 0)" but it's not working. Can anyone tell me where I'm going wrong?

```html
Press a key on the keyboard in the input field to get the Unicode character code of the pressed key.

function myFunction(event) {
  var x = event.which || event.keyCode || event.charCode;
  if (x >= 65 && x <= 90 || x >= 97 && x <= 122 || x == 32) {
    var element = document.getElementById('ul');
    var fragment = document.createDocumentFragment();
    var li = document.createElement('li');
    li.textContent = String.fromCharCode(x) + ': ' + x;
    fragment.appendChild(li);
    element.appendChild(fragment);
  };
};

function downFunction(event) {
  var y = event.which || event.keyCode || event.charCode;
  if (y == 8) {
    var element = document.getElementById('ul');
    element.removeChild(element.lastChild);
  };
};

$('#text').click(function (e) {
  $('#press').hide();
});

$('#text').focusout(function (e) {
  var element = document.getElementById('ul');
  if (element.childNodes.length == 0) {
    $('#press').show();
  };
});
```

<issue_comment>username_1: The reason your code doesn't work as expected is because you're using [`childNodes`](https://developer.mozilla.org/en-US/docs/Web/API/Node/childNodes), which counts all child nodes of an element, including plain text. While there may not be any elements in your `#ul`, there is a text node. Note that:

```
<ul>
</ul>
```

is different from:

```
<ul></ul>
```

To remedy your issue, use [`children`](https://developer.mozilla.org/en-US/docs/Web/API/ParentNode/children) instead, which returns an element-only collection that won't be affected by how you space your HTML code.
The difference can be seen in the following snippet. **Snippet:**

```js
/* ----- JavaScript ----- */
var ul1 = document.getElementById("ul1");
var ul2 = document.getElementById("ul2");

console.log("ul1.childNodes: " + ul1.childNodes.length);
console.log("ul1.children: " + ul1.children.length);
console.log("ul2.childNodes: " + ul2.childNodes.length);
console.log("ul2.children: " + ul2.children.length);
```

```html
<!-- #ul1 contains a whitespace-only text node; #ul2 is completely empty -->
<ul id="ul1">
</ul>
<ul id="ul2"></ul>
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: Try to change your condition as below:

```
element.childNodes.length === 0 || (element.childNodes.length === 1 && !element.childNodes[0].tagName)
```

This way you can check whether the only child is an actual DOM node with a tagName. Upvotes: 0
2018/03/18
483
1,659
<issue_start>username_0: I'm trying to complete an img tag src using a filename stored in a mysql database. At the moment I have to load two images from this one MySQL query (They are chosen at random) moving forward however I will likely need to do more. This is my current query:

```
// Get random 2
$query="SELECT * FROM images ORDER BY RAND() LIMIT 0,2";
$result = $conn->query($query);
while($row = mysqli_fetch_object($result)) {
    $images[] = (object) $row;
}
```

My connection documentation is stored in a separate file and called earlier in the process (This has worked for other functions so don't think the issue is there) I then try to insert the image address later on like this:

```
<img src="images/<?=$images[0]->filename?>">
<img src="images/<?=$images[1]->filename?>">
```

I've been staring at this and trying stuff for the best part of an hour so it's likely it's either an obvious and stupid problem or I've gone completely off track! Thanks in advance!<issue_comment>username_1: Try this:

```
<img src="/images/<?php echo $images[0]->filename; ?>">
<img src="/images/<?php echo $images[1]->filename; ?>">
```

Upvotes: -1 <issue_comment>username_2: It might be easier to use mysqli_fetch_assoc() instead of mysqli_fetch_object().

```
$query = "SELECT * FROM images ORDER BY RAND() LIMIT 0,2";
$result = $conn->query($query);
while ($row = mysqli_fetch_assoc($result)) {
    $images[] = $row;
}
```

But, that inline PHP does not look reliable at all. You will want to do it like this.

```
<img src="images/<?php echo $images[0]['filename']; ?>" style="width:100%" />
<img src="images/<?php echo $images[1]['filename']; ?>" style="width:100%" />
```

Hope this helps. Upvotes: 0
2018/03/18
798
2,583
<issue_start>username_0: Hello, here is the code:

```
$bot.on("presenceUpdate", (oldMember, newMember) => {
    if(oldMember.presence.status !== newMember.presence.status){
        const memberID = newMember.user.id;
        if(newMember.user.presence.status === "online"){
            membersOnline.push(memberID);
            console.log("-------online--------");
            console.log(membersOnline);
        }else if(newMember.user.presence.status === "offline"){
            membersOnline.filter(e => e !== memberID)
            console.log("-------offline--------");
            console.log(membersOnline);
        }
        console.log(`${newMember.user.username} is now ${newMember.presence.status}`);
    }
});
```

The result is that the id is not getting removed:

> -------offline-------- []
> -l--RACE--l- is now offline
> -------online-------- [ '203287818330570752' ]
> -l--RACE--l- is now online
> -------offline-------- [ '203287818330570752' ]
> -l--RACE--l- is now offline
> -------online-------- [ '203287818330570752', '203287818330570752' ]
> -l--RACE--l- is now online

[DOCS](https://discord.js.org/#/docs/main/stable/class/Client?scrollTo=e-presenceUpdate)<issue_comment>username_1:

> The filter() method creates a new array with all elements that pass the test implemented by the provided function.
> <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter>

Array.prototype.filter() does not change the original array, so the simple solution is:

```
membersOnline = membersOnline.filter(e => e !== memberID);
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: I believe the filter method returns a new array, so in your code you run the method but do not assign the returned value to a variable. Please see the code below...
```
$bot.on("presenceUpdate", (oldMember, newMember) => {
    if(oldMember.presence.status !== newMember.presence.status){
        const memberID = newMember.user.id;
        if(newMember.user.presence.status === "online"){
            membersOnline.push(memberID);
            console.log("-------online--------");
            console.log(membersOnline);
        }else if(newMember.user.presence.status === "offline"){
            var newOnlineMembers = membersOnline.filter(e => e !== memberID); // assign the filtered array to a new variable
            console.log("-------offline--------");
            console.log(newOnlineMembers); // output the new variable
        }
        console.log(`${newMember.user.username} is now ${newMember.presence.status}`);
    }
});
```

Upvotes: 0
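The behaviour behind the accepted fix is easy to verify in isolation: `filter` returns a new array and never mutates its receiver, so the result has to be assigned back. A quick standalone check (sample ids taken from the logs above):

```javascript
// filter() never mutates the array it is called on; it returns a new one.
let membersOnline = ["203287818330570752", "111"];

membersOnline.filter(e => e !== "203287818330570752");
console.log(membersOnline.length); // 2 (the original is unchanged)

membersOnline = membersOnline.filter(e => e !== "203287818330570752");
console.log(membersOnline); // [ '111' ] (reassignment makes the removal stick)
```

This is the whole bug in the original snippet: the filtered array was computed and then discarded.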
2018/03/18
853
3,317
<issue_start>username_0: I've been facing a weird issue lately with my React app. I'm trying to parse a JSON object that contains arrays of data. The data is something like this:

```
{"Place":"San Francisco","Country":"USA",
 "Author":{"Name":"xyz", "Title":"View from the stars"},
 "Year":"2018",
 "Places":[{"Price":"Free", "Address":"sfo"},{"Price":"$10","Address":"museum"}]
}
```

The data contains multiple arrays like the Author example I've just shown. I have a function that fetches this data from a URL. I'm calling that function in componentDidMount. The function takes the data, i.e. responseJson, and then stores it in an empty array that I've set called result, using setState. In my state I have result as result: []. My code for this would look something like this:

```
this.setState({result: responseJson})
```

Now, when I try to access, say, the Author Name from result, I get an error. So something like this: {this.state.result.Author.Name} I'm doing this in a function that I'm using to display stuff, and I call that function in the return of my render function. I get an error stating: TypeError: Cannot read property 'Name' of undefined. I get the same error if I try anything that goes a level below inside. If I display {this.state.result.Place} or {this.state.result.Country} it's all good. But if I try, say, {this.state.result.Author.Title} or {this.state.result.Places[0].Price} it gives me the same error. The surprising thing is that I've parsed this same object in a different component of mine and got no errors there. Could anyone please explain why this is happening? If I store the individual element while I setState in my fetch call function, I can display it. For example:

```
{result: responseJson, AuthorName: responseJson.Author.Name }
```

Then I'm able to go ahead and use it as {this.state.AuthorName}. Please help me find a solution to this problem.
Thanks in advance!
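About the error itself: before the fetch resolves, `this.state.result` still holds its initial empty value, so `result.Author` is `undefined`, and reading `.Name` off `undefined` throws, while a top-level access like `result.Place` just yields `undefined` without throwing. A minimal plain-JavaScript sketch of a guard (the field names come from the question; the helper `authorName` is made up for illustration):

```javascript
// Before the fetch completes, state still holds the initial empty result,
// so one level of nesting is undefined and a second level of access throws.
const initialResult = {};                          // state before setState runs
const loadedResult = { Author: { Name: "xyz" } };  // state after the fetch

function authorName(result) {
  // guard: only descend if the intermediate object exists
  return result.Author ? result.Author.Name : undefined;
}

console.log(authorName(initialResult)); // undefined (no crash)
console.log(authorName(loadedResult));  // "xyz"
```

In a React render, the same guard lets the first render pass harmlessly and the second render, after setState, show the real value.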
2018/03/18
914
3,197
<issue_start>username_0: I may be misunderstanding here. I have a node server running at `localhost:3000`, and a React app running at `localhost:8080`. The React app is making a `get` request to the node server - my server code for this looks like:

```
const cookieParser = require('cookie-parser');
const crypto = require('crypto');
const express = require('express');
const app = express();

app.use(cookieParser());

app.get('/', function (req, res) {
  let user_token = req.cookies['house_user']; // always empty
  if (user_token) {
    // if the token exists, great!
  } else {
    crypto.randomBytes(24, function(err, buffer) {
      let token = buffer.toString('hex');
      res.setHeader('Access-Control-Allow-Origin', 'http://localhost:8080');
      res.cookie('house_user', token, {maxAge: 9000000000, httpOnly: true, secure: false });
      res.send(token);
    });
  }
});

app.listen(3000, () => console.log('Example app listening on port 3000!'))
```

I'm trying to set the `house_user` token, so that I can later keep track of requests from users. However, the token is not being set on the user (request from `localhost:8080`) - the `house_user` token is always empty (in fact, `req.cookies` is entirely empty). Do I need to do something else?<issue_comment>username_1: I just tried the code below (and it worked). As a reminder, you can just paste this in myNodeTest.js, then run `node myNodeTest.js` and visit <http://localhost:3003>. If it does work, then it probably means you're having CORS issues. **[EDIT] withCredentials:true should do the trick with axios.**

```
axios.get('localhost:3000', {withCredentials: true}).then(function (res) {
  console.log(res)
})
```

```
const express = require('express')
const cookieParser = require('cookie-parser')
const crypto = require('crypto');
const app = express()
const port = 3003

app.use(cookieParser());

app.get('/', function (req, res) {
  let user_token = req.cookies['house_user']; // always empty
  if (user_token) { // if the token exists, great!
} else {
    crypto.randomBytes(24, function(err, buffer) {
      let token = buffer.toString('hex');
      res.setHeader('Access-Control-Allow-Origin', 'http://localhost:8080');
      res.cookie('house_user', token, {maxAge: 9000000000, httpOnly: true, secure: true });
      res.append('Set-Cookie', 'house_user=' + token + ';');
      res.send(token);
    });
  }
});

app.get('/', (request, response) => {
  response.send('Hello from Express!')
})

app.listen(port, (err) => {
  if (err) {
    return console.log('something bad happened', err)
  }
  console.log(`server is listening on ${port}`)
})
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: Making my comment into an answer since it seemed to have solved your problem. Since you are running on `http`, not `https`, you need to remove `secure: true` from the cookie: it makes the cookie be sent only over an `https` connection, which keeps the browser from ever sending it back to you over your `http` connection. Also, remove the `res.append(...)` call, as `res.cookie()` is all that is needed. Upvotes: 1
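One related detail worth noting: when the client sends a credentialed request (`withCredentials: true`), CORS requires the response to carry `Access-Control-Allow-Credentials: true` and an exact origin rather than `*`, or the browser will not expose the response or accept its cookie. The helper below is purely illustrative (it is not an Express API); it just sketches the set of response headers involved:

```javascript
// Illustrative only: the response headers a cross-origin cookie needs.
// The header names are standard CORS/cookie headers; the helper itself
// is a made-up sketch, not part of Express.
function corsCookieHeaders(origin, name, value) {
  return {
    "Access-Control-Allow-Origin": origin,        // must be the exact origin, not "*"
    "Access-Control-Allow-Credentials": "true",   // required for withCredentials requests
    "Set-Cookie": name + "=" + value + "; Max-Age=9000000; HttpOnly",
  };
}

const headers = corsCookieHeaders("http://localhost:8080", "house_user", "abc123");
console.log(headers["Set-Cookie"]); // "house_user=abc123; Max-Age=9000000; HttpOnly"
```

In Express this would typically be done with the `cors` middleware configured with `origin` and `credentials: true` rather than by hand, but the headers on the wire are the same.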
2018/03/18
804
2,784
<issue_start>username_0: In function types, you can write ``` ()->()? ``` And it builds fine! What is the `()?`?<issue_comment>username_1: Yes. ``` ()? ``` means optional void. I used to write it as (Void)? Don't think optional and void are the same thing. They are absolutely not. Essentially: ‘void’ is a type, while null is a [special] value. This should be helpful in explaining it. [Swift - Optional Void](https://stackoverflow.com/questions/26837951/swift-optional-void) Upvotes: 1 <issue_comment>username_2: `() -> ()?` defines a higher order type, taking the *empty tuple* `()` as argument, and returning an `Optional` of the empty tuple, `Optional<()>` or, using the `?` sugar for `Optional`, simply `()?`. There is a typedef for the empty tuple named `Void`, so all the following are equivalent higher order types: ``` () -> ()? () -> Optional<()> () -> Void? () -> Optional ``` Example when applied to closures: ``` // c1 through c4 all have the same type, but use // typedefs (Void) or optional sugar (?) for variations. var c1: () -> ()? = { return nil } let c2: () -> Optional<()> = c1 let c3: () -> Void? = c1 let c4: () -> Optional? = c1 ``` Note though that `()` is also *an instance* of the empty tuple (in addition to being use as the *type* for it), meaning we could re-assign e.g. 
the mutable closure `c1` above to return the *value* `()`:

```
c1 = { return () }
c1 = { return .some(()) }
```

On the other hand, as `Void` is simply a typedef for the empty tuple *type*, we'd need to explicitly instantiate a non-optional (named) `Void` instance if we'd like to return "an empty tuple instance" without actually using the `()` value (and only using the `Void` typedef):

```
c1 = {
    let v: Void
    return v
}
```

Generally, declaring a non-optional immutable property without assigning a *value* to it is an error in Swift, but since the `Void` typedef, namely the empty tuple, can only hold a single value (`()`), the Swift compiler is seemingly smart enough to allow omitting the assignment of this only possible value to `v` above (or, a special case for the quite special empty tuple type). Finally, note that all non-optional `()`-returning closures or functions can omit the return value, and, in function declarations, omit also the type:

```
// Now studying the case of non-optional () return
var c6: () -> () = { print("Implicit () return") }

func f1() { print("Implicit return type, implicit () return") }
func f2() { print("Implicit return type, explicit () return"); return () }
func f3() -> () { print("Explicit return type, implicit () return") }
func f4() -> () { print("Explicit return type, explicit () return"); return () }

c6 = f1
c6 = f2
c6 = f3
c6 = f4

let emptyTupleInstance = c6() // "()"
```

Upvotes: 3 [selected_answer]
2018/03/18
482
1,883
<issue_start>username_0: I am making an app, and in that app, users login and I am storing their information; however, I have noticed that I don't have a users' password information after they register. Is it a good idea to store users' password when they register through Firebase? And is there a point where I will need their passwords? I want to make sure before I proceed further. Thanks!<issue_comment>username_1: You do not do that. Use the (awesome, amazing) Firebase authentication system. Click right here: [![enter image description here](https://i.stack.imgur.com/VSUcx.png)](https://i.stack.imgur.com/VSUcx.png) **on the left**, to see all the users - click **"Authentication"**. You never see / you cannot see their passwords. You don't handle or touch the passwords at all. In the Android or iOS app, you get the userid - and that's it. **The answer by @PeterHaddad shows perfectly how to do that.** That's the first and most basic step in any Firebase ios/droid app. In your data you'll have a "table" called probably "userData/" and that's where you keep all data about the user. (For example, you may store their address, real name, shoe size .. whatever is relevant in your app.) Note - FBase is so amazing, your users can also connect with other methods (phone, etc). For your reference in the future, that is explained [here](https://stackoverflow.com/a/48692393/294884) Upvotes: 5 [selected_answer]<issue_comment>username_2: You don't need to store the password in the firebase database, after you authenticate the user using `createUserWithEmailAndPassword` the email and other info will be stored in the authentication console. So you do not need the password in the database, all you need is the userid to connect auth with database. ``` FirebaseUser user=FirebaseAuth.getInstance().getCurrentUser(); String useruid=user.getUid(); ``` Upvotes: 3
2018/03/18
807
3,330
<issue_start>username_0: I understand that React doesn't update state immediately, so can someone tell me how I can log this state synchronously? My state is not a boolean, and this answer didn't help me: [setState doesn't update the state immediately](https://stackoverflow.com/questions/41278385/setstate-doesnt-update-the-state-immediately). Also, I don't understand why, after clicking on the Prev button, it increments the value first and then decrements it.

```
class A extends Component {
  constructor(props) {
    super(props);
    this.state = {
      value: 1
    }
  }

  handleNext() {
    this.setState(prevState => ({
      value: prevState.value + 1
    }));
    console.log(this.state.value);
  }

  handlePrev() {
    if(this.state.value > 1) {
      this.setState(prevState => ({
        value: prevState.value - 1
      }));
    }
    console.log(this.state.value);
  }

  render() {
    return (
      <div>
        <button onClick={() => this.handlePrev()}>Prev</button>
        <button onClick={() => this.handleNext()}>Next</button>
      </div>
    );
  }
}
```

<issue_comment>username_1: The second argument to `setState` is a callback that executes after the state has updated.

```
handleNext() {
  this.setState({ value: this.state.value + 1 }, () => {
    console.log(this.state.value);
  });
}
```

From [the setState docs](https://reactjs.org/docs/react-component.html#setstate):

> The second parameter to setState() is an optional callback function that will be executed once setState is completed and the component is re-rendered. Generally we recommend using componentDidUpdate() for such logic instead.

Upvotes: 2 <issue_comment>username_2: Your code is fine, try printing `this.state.value` in your render function.
Example:

```
class A extends Component {
  constructor(props) {
    super(props);
    this.state = {
      value: 1,
    };
  }

  handleNext() {
    this.setState(prevState => ({
      value: prevState.value + 1,
    }));
  }

  handlePrev() {
    if (this.state.value > 1) {
      this.setState(prevState => ({
        value: prevState.value - 1,
      }));
    }
  }

  render() {
    const { value } = this.state;

    return (
      <div>
        { value }
        ---------
        <button onClick={() => this.handlePrev()}>Prev</button>
        <button onClick={() => this.handleNext()}>Next</button>
      </div>
    );
  }
}
```

It seems like your `handlePrev` is incrementing then decrementing because you're constantly printing the previous state. So when you decrement you are printing the result of the previous increment.

```
|---------------------|-------------------------|
|    Current State    |    Your console.log     |
|---------------------|-------------------------|
|          1          |                         | init
|---------------------|-------------------------|
|          2          |            1            | increment
|---------------------|-------------------------|
|          3          |            2            | increment
|---------------------|-------------------------|
|          2          |            3            | decrement
|---------------------|-------------------------|
```

Upvotes: 1 <issue_comment>username_3: `setState` takes a callback as its second argument.

```
handleNext() {
  this.setState(prevState => ({
    value: prevState.value + 1
  }), () => console.log('updated', this.state.value));
}
```

Upvotes: 2
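The timing discussed in this thread can be reproduced without React at all. The store below is a made-up sketch rather than React's real implementation: it merely applies updates on the next tick, which is enough to show why a `console.log` placed right after `setState` still prints the old value while the callback sees the new one.

```javascript
// Toy store that defers updates to the next tick (loosely mimicking how
// React batches state updates instead of applying them synchronously).
function createStore(initial) {
  let state = initial;
  return {
    get: () => state,
    setState(updater, callback) {
      setImmediate(() => {
        state = Object.assign({}, state, updater(state));
        if (callback) callback(state); // runs only after the update is applied
      });
    },
  };
}

const store = createStore({ value: 1 });
store.setState(s => ({ value: s.value + 1 }), s => {
  console.log("in callback:", s.value);                   // 2: update applied
});
console.log("right after setState:", store.get().value);  // 1: still the old value
```

Run under Node, the synchronous line prints first with the old value, and the callback prints afterwards with the new one, which is exactly the pattern the answers above recommend relying on.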